At the close of last year, I presented an in-house talk comparing Continuous Delivery solutions for Java and .NET. Of course, this was not a simple side-by-side comparison; a Microsoft-oriented colleague and I created a small challenge for ourselves. My colleague was going to build Java software in Team Foundation Server. And me? I was stuck creating a delivery pipeline for Microsoft .NET in Jenkins.
Now, I had been thinking about looking into .NET since the appearance of .NET core, and this gave me all the excuse I needed…
So, this is going to be the first in a series of blog posts journaling my adventures in .NET core. For this first post, I’m going to be talking mostly about setting up a simple delivery pipeline using Jenkins. From this basic delivery pipeline, other posts in this series will focus on developing the application, with unit tests and acceptance tests, all the while making sure our delivery pipeline stays up to date.
Well then, let’s begin!
Hello .NET core
So first off, we get started with a new .NET core application, using the .NET core runtime and the CLI tooling.
Behold! The glory of “Hello World” in .NET Core. So, a couple of interesting things to note:
- dotnet new creates a simple, new C# project. Luckily, no VB.NET shenanigans. This basically creates a Program.cs file and a project.json file. The JSON file is kind of like your Maven pom file or your Gradle build file. Kind of.
- dotnet restore downloads all dependencies into the local cache, so you’re good to go. Interesting fact: you always need to run dotnet restore separately; it is not included as part of something like dotnet run or dotnet publish.
- dotnet run does two things. Our program is not yet compiled, so it implicitly does a build of our program (dotnet build). Then, it runs the program, resulting in our much desired ‘Hello World’.
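Since the whole point of this series is delivery, here’s a little preview of how those same commands will surface later as sh steps in a Jenkins pipeline stage. This is a hypothetical fragment, not a complete Jenkinsfile; the real stage follows further down:
steps{
    sh 'dotnet restore' // always a separate step; never implied by run or publish
    sh 'dotnet run'     // implicitly runs dotnet build first, then executes the program
}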
For now, this is about all I need. I have a ‘Hello World’ program, I can clone it anywhere, and get it building and running.
So, the coolest thing about all of this? It’s all native on my Mac. And want to know what’s even better? It works just as well on Linux. And just like that, running and distributing our application got a whole lot easier, as I, for one, hail our Docker overlords.
On Jenkins 2
Now, as I said, I had extra motivation to use Jenkins as the build tool for my experiments. One, it was the topic of the talk. And of course two, it’s just better than TFS or VSTS coughs.
Ok, well, let’s not go into that comparison on here.
At the very least, Jenkins’ delivery pipelines are fun to experiment with, and allow us to create a delivery pipeline as code (as opposed to clicking through a bunch of screens). Last year, while Microsoft was busy developing .NET core, Cloudbees (the company mostly behind Jenkins) was busy developing Jenkins 2, focusing on delivery pipelines.
Though still in development, the delivery pipeline vision from Jenkins has been clear throughout: all build configuration is scripted, in a file called Jenkinsfile that you keep in git with your software.
There are, at this moment, two ‘flavors’ of Jenkinsfile:
- a pure Groovy script. Allows maximal freedom, but is error-prone and often not as readable
- a declarative script. The syntax is more strictly defined; less chance for errors, at the cost of flexibility. Also, the syntax is not quite final yet (beta2 at the moment of writing)
If your Jenkinsfile starts with something akin to node{, you’re looking at a script that allows all kinds of Groovy code. You can do anything, but it’s easy to make mistakes. If the file starts with pipeline{, the syntax is much more strictly defined, but more readable. I would advise using the declarative pipeline{ style for new projects.
For this project, we’re going to use the declarative style. In my opinion, it’s cleaner, and it certainly looks like it’s the direction Cloudbees is heading towards.
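To make the structure concrete before we fill in the details, here’s a minimal sketch of the declarative skeleton this post builds towards. The stage bodies are placeholders, and the agent choices reflect the stages we’ll build below:
pipeline {
    agent none // no global agent; every stage declares its own requirements
    stages {
        stage('Build binaries') {
            agent { docker 'microsoft/dotnet:latest' }
            steps { echo 'TODO: build, see below' }
        }
        stage('Create docker image') {
            agent { label 'hasDocker' }
            steps { echo 'TODO: dockerize, see below' }
        }
        stage('Run in production') {
            agent { label 'hasDocker' }
            steps { echo 'TODO: ship it, see below' }
        }
    }
}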
Back to the core of things
Now, that’s all nice and well, but what about our awesome hello world application? Well, let’s get started on a delivery pipeline for that. What’s the delivery architecture going to be? Our application is going to be packaged into a Docker container, and uploaded to Docker Hub. From there, deployment to production should be easy enough. Let’s begin.
Building the code
So, to begin with, I created a first stage called ‘Build binaries’. It builds our .NET core binaries.
stage('Build binaries'){
    agent { docker 'microsoft/dotnet:latest' }
    steps{
        git url: 'https://github.com/corstijank/blog-dotnet-jenkins.git'
        sh 'dotnet restore'
        sh 'dotnet publish project.json -c Release -r ubuntu.14.04-x64 -o ./publish'
        stash includes: 'publish/**', name: 'prod_bins'
    }
}
Let’s start with the line where we declare which agent we are going to use for this stage. We use a Docker container as our agent, and base it on the microsoft/dotnet:latest image. This is a simple line, but it has some pretty neat implications. It basically means that, for this pipeline to run, we don’t need to install the .NET core SDK anywhere. All we need is access to a Docker host. The pipeline downloads the image, starts a container, and executes the steps of our stage inside the container.
Need to pin a specific version? No problem. Just use a different Docker tag. Want to build on a new version of the SDK? Use a different Docker tag. Never install a .NET SDK on a build server again. It’s glorious.
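For example (the pinned tag is a hypothetical illustration; any tag published for microsoft/dotnet on Docker Hub works):
agent { docker 'microsoft/dotnet:latest' }             // always build on the newest SDK image
agent { docker 'microsoft/dotnet:1.0.0-preview2-sdk' } // or pin a specific version (example tag)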
So, what do we run in our container? Basically three things: we clone the git repository, restore the dependencies, and publish our application. This is all pretty basic .NET core CLI stuff.
Lastly, and this is important, we use stash to copy the resulting binaries out of our container. This basically creates a zip file of everything in the specified directory, and ensures that zip file is made available on request to later stages in the pipeline under the specified name.
3…2…1…Dockerize
Now that we have our binaries, it’s time to create our docker image. I purposefully put this as a separate stage.
Simply said, the requirement for the steps in this stage (access to a Docker daemon) is different from the requirement for the steps of the previous stage (access to the .NET core SDK). Much like in object-oriented programming, differing requirements are a pretty good pointer to decouple stages in the delivery pipeline.
Let’s look at the pipeline:
stage('Create docker image'){
    agent { label 'hasDocker' }
    environment {
        DOCKER_ID = credentials('docker-id')
    }
    steps{
        // Unstash the binaries from the previous stage
        unstash 'prod_bins'
        sh """ docker build -t corstijank/blog-dotnet-jenkins:1.0-${env.BUILD_NUMBER} .
               docker tag corstijank/blog-dotnet-jenkins:1.0-${env.BUILD_NUMBER} corstijank/blog-dotnet-jenkins:latest
               docker login -u ${DOCKER_ID_USR} -p ${DOCKER_ID_PSW}
               docker push corstijank/blog-dotnet-jenkins:1.0-${env.BUILD_NUMBER}
               docker push corstijank/blog-dotnet-jenkins:latest """
    }
}
There’s a bunch of new stuff going on here. Let’s inspect. We run this stage on any Jenkins agent that has a label called ‘hasDocker’. The label is just something I made up, but it’s a nice way of identifying whether a Jenkins agent comes with access to a Docker daemon or not. Mind you, this stage does not run in a Docker container. It’s just a simple process on the agent executing it.
Also, we ask Jenkins for credentials, under the ID ‘docker-id’. Again, this is an identifier I made up myself. It’s up to the Jenkins administrator to create credentials for a DockerHub account under that ID in the Jenkins instance.
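One detail worth spelling out, because the docker login step above depends on it: for a username/password credential, the credentials() helper also defines companion environment variables with _USR and _PSW suffixes.
environment {
    // 'docker-id' is the credential ID the Jenkins administrator configured.
    // Because it is a username/password credential, Jenkins also defines
    // DOCKER_ID_USR (the username) and DOCKER_ID_PSW (the password) for us.
    DOCKER_ID = credentials('docker-id')
}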
From there on, it’s pretty self-explanatory. We unstash (read: unzip) the created binaries. We use docker build to create an image based on a Dockerfile in our repository. We tag it using both the build number and a latest tag. We log in to DockerHub with our credentials, and we push our images.
Here’s the Dockerfile:
FROM microsoft/dotnet:runtime
COPY publish /app
WORKDIR /app
RUN ["chmod", "744", "./blog-dotnet-jenkins"]
ENTRYPOINT ["./blog-dotnet-jenkins"]
We use the .NET core runtime image here, as we don’t need the full SDK. We add our publish folder as /app, and mark the executable file as such. The ENTRYPOINT points to the executable file, ensuring its execution when starting the container.
If we run this pipeline in our Jenkins instance, we have achieved a fully runnable Docker image!
Success!
F*ck it, ship it
The final stage really isn’t anything special for now:
stage('Run in production'){
    agent { label 'hasDocker' }
    steps{
        sh "docker run -d corstijank/blog-dotnet-jenkins:1.0-${env.BUILD_NUMBER}"
    }
}
Of course, this is not anywhere near a satisfying production environment. I promise to extend our pipeline to make sure we deploy nicely to a separate Docker host representing our production server. Or maybe some container service somewhere. At this point, it’s all Docker anyway, and that should be the least of my struggles in the coming adventure.
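As a rough idea of where that could go, here’s a hedged sketch of a deploy stage targeting a remote Docker host. The tcp://prod-host:2375 address is purely hypothetical, and in real life you’d want TLS on that connection:
stage('Run in production'){
    agent { label 'hasDocker' }
    steps{
        // Point the docker CLI at the (hypothetical) production host instead of the local daemon
        sh "docker -H tcp://prod-host:2375 pull corstijank/blog-dotnet-jenkins:1.0-${env.BUILD_NUMBER}"
        sh "docker -H tcp://prod-host:2375 run -d corstijank/blog-dotnet-jenkins:1.0-${env.BUILD_NUMBER}"
    }
}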
Deliver first, develop later
So, maybe I went a bit overboard for a simple Hello World application. I think many developers tend to focus too much on developing features first, instead of delivering features. The whole idea was to test whether a .NET core application could be delivered using Jenkins. Never mind the feature yet. For now, it’s looking rather glorious. Of course there are going to be add-ons and challenges later:
- unit testing
- acceptance testing with some kind of database backend
- actual deployment to an actual production environment
- gathering test and deployment results
But honestly, after this experiment I feel confident about using .NET core on Jenkins. And it almost pains me to say it, but I’m kind of looking forward to experimenting with and building this little project. So, definitely to be continued soon.
If you’re curious about the full sources, or just want to peek at the complete picture, you can check out the repository on GitHub.
Any questions? Feel free to shoot away below!