
ASP.NET Core in .NET 5 – pass parameters to actions

Passing parameters to actions is an essential part of building a RESTful Web API. ASP.NET Core, released as part of .NET 5, offers multiple ways to pass parameters to the methods that represent your endpoints. Let’s see what they are.

Pass a parameter as part of the URL

When passing a parameter in a URL, you need to define a route that contains the parameter. Let’s have a look at an example:

    [Route("{daysForward}")]
    [HttpGet]
    public IActionResult Get(int daysForward)
    {
        var rng = new Random();
        return new JsonResult(new WeatherForecast
        {
            Date = DateTime.Now.AddDays(daysForward),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });
    }

This method returns a WeatherForecast for a single day in the future. The daysForward parameter represents how many days in advance the weather forecast should be returned. Notice that daysForward is part of the route, so a valid URL for this endpoint will look like:

GET: weatherForecast/3

We can also put the [FromRoute] attribute before the variable type, but binding from the route is the default behavior, so it works the same way without it.

   public IActionResult Get([FromRoute] int daysForward)
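
As a small, optional variation (not in the original sample), you can add a route constraint so that only integer values match the route – a request like weatherForecast/abc then produces a 404 instead of a model binding error:

```csharp
[Route("{daysForward:int}")]   // matches only when daysForward parses as an int
[HttpGet]
public IActionResult Get(int daysForward) { /* same body as above */ }
```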

Pass parameter in a query string

This is a very common way of passing additional parameters, because it does not require us to change the routing, so it is also backward compatible. That is important if we are changing an existing solution.

Let’s look at a different method that returns a collection of weather forecasts with a sorting option.

    [HttpGet]
    public IEnumerable<WeatherForecast> Get([FromQuery] bool sortByTemperature = false)
    {
        var rng = new Random();
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });

        if (sortByTemperature)
        {
            forecasts = forecasts.OrderByDescending(f => f.TemperatureC);
        }

        return forecasts;
    }

In this example, we pass an optional sortByTemperature parameter. Notice that we use the [FromQuery] attribute to indicate that the value is taken from the query string. A URL to this endpoint would look like this:

GET: weatherForecast?sortByTemperature=true

You can pass multiple parameters like this:

GET: weatherForecast?key1=value1&key2=value2&key3=value3

Remember that a URL needs to be encoded properly to work right. If you were to pass a parameter like this:

https://www.michalbialecki.com/?name=Michał Białecki

It would need to be encoded as:

https://www.michalbialecki.com/?name=Micha%C5%82%20Bia%C5%82ecki
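
You don’t have to do this encoding by hand – in C#, Uri.EscapeDataString performs exactly this percent-encoding. A small standalone sketch:

```csharp
using System;

class UrlEncodingDemo
{
    static void Main()
    {
        // 'ł' is percent-encoded as its UTF-8 bytes (%C5%82), and the space as %20.
        var encoded = Uri.EscapeDataString("Michał Białecki");
        Console.WriteLine($"https://www.michalbialecki.com/?name={encoded}");
        // → https://www.michalbialecki.com/?name=Micha%C5%82%20Bia%C5%82ecki
    }
}
```

Note that System.Net.WebUtility.UrlEncode would encode the space as + instead of %20, which is also valid in a query string.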

Pass an object in a query string

When you’re passing a lot of query parameters, it might be worth handling them as an object. Take a look at the code below:

    // GET: weatherForecast/GetFiltered?SortByTemperature=true&City=Poznan
    [HttpGet("GetFiltered")]
    public IEnumerable<WeatherForecast> GetFiltered([FromQuery] WeatherForecastFilters filters)
    {
        var rng = new Random();
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)],
            City = filters.City
        });
 
        if (filters.SortByTemperature)
        {
            forecasts = forecasts.OrderByDescending(f => f.TemperatureC);
        }
 
        return forecasts;
    }

As you can see, I’m passing the WeatherForecastFilters class as a query parameter. Notice that I’m using the [FromQuery] attribute before the class name.

And WeatherForecastFilters class looks like this:

    public class WeatherForecastFilters
    {
        public bool SortByTemperature { get; set; }
 
        public string City { get; set; }
    }

It is a standard class without any additional attributes. Let’s now make a request to the URL below:

GET: /WeatherForecast/GetFiltered?SortByTemperature=true&City=Poznan

The response would be:

So parameters can be passed in by their names, even when they are properties of a class. What’s more, they are bound just by their names, without any class prefix, which means that all of the properties must have unique names.

Here is a screenshot from Visual Studio.

Handling query parameters as class properties will work, but only for properties declared directly on that class. It will not work with nested objects – their properties will not be mapped.

Pass parameters with headers

Passing parameters in request headers is less popular, but also widely used. They don’t show up in the URL, so they’re less noticeable to the user. A common scenario for passing parameters in a header would be providing credentials or a parent request id to enable multi-application tracking. Let’s have a look at this example:

    [HttpPost]
    public IActionResult Post([FromHeader] string parentRequestId)
    {
        Console.WriteLine($"Got a header with parentRequestId: {parentRequestId}!");
        return new AcceptedResult();
    }

In order to send a POST request, we need to use some kind of tool; I’ll use Postman:

Here you see that I specified headers and parentRequestId is one of them.
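
If you’d rather call the endpoint from code than from Postman, the same header can be attached to an HttpRequestMessage – a sketch in which the URL and the id value are made up:

```csharp
using System;
using System.Linq;
using System.Net.Http;

class HeaderDemo
{
    static void Main()
    {
        // Compose the POST request; sending it is omitted here, so this runs without the API.
        var request = new HttpRequestMessage(HttpMethod.Post, "https://localhost:5001/weatherForecast");
        request.Headers.Add("parentRequestId", "0b4469f5");

        // The custom header travels alongside the standard ones.
        Console.WriteLine(request.Headers.GetValues("parentRequestId").Single());
        // → 0b4469f5
    }
}
```

With a real call you would then pass the request to httpClient.SendAsync(request).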

Pass parameters in a request body

The most common way to pass data is to include it in the request body. We can add a Content-Type header with the value application/json to inform the receiver how to interpret the body. Let’s have a look at our example:

    [HttpPost]
    public IActionResult Post([FromBody] WeatherForecast forecast)
    {
        Console.WriteLine($"Got a forecast for data: {forecast.Date}!");
        return new AcceptedResult();
    }

We use the [FromBody] attribute to indicate that forecast will be taken from the request body. In ASP.NET Core in .NET 5 we don’t need to deserialize the JSON body into a WeatherForecast object ourselves – it happens automatically. To send a POST request, let’s use Postman once again:

Keep in mind that the size of the request body is limited by the server and can be anywhere between 1 MB and 2 GB. In ASP.NET Core the default maximum request body size is around 28 MB, but that can be changed. What if you’d like to send bigger files than that, over 2 GB? Then you should look into sending the content as a stream or sending it in chunks.

Pass parameters in a form

Sending content in a form is not very common, but it is the best solution if you want to upload a file. Let’s have a look at the example:

    [HttpPost]
    public IActionResult SaveFile([FromForm] string fileName, [FromForm] IFormFile file)
    {
        Console.WriteLine($"Got a file with name: {fileName} and size: {file.Length}");
        return new AcceptedResult();
    }

This method doesn’t really send a file, but it will successfully receive a file from the request. The IFormFile interface is used specifically for handling an uploaded file.

When sending the request, the body needs to go as form data (in Postman choose the form-data body type, which sends multipart/form-data when a file is attached), and in the Body part we need to choose a file:

Let’s see what we get when we debug this code:

And the file is correctly read. An interesting fact is that with IFormFile we get not only the binary data but also the file type and name. So you might ask why I send a file name separately. This is because you might want to name the file differently on the server than the one you are sending.
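
On the client side, such a request can be composed with MultipartFormDataContent; note that the part names ("fileName", "file") must match the action’s parameter names. The file bytes below are made up:

```csharp
using System;
using System.Net.Http;

class FormUploadDemo
{
    static void Main()
    {
        // Compose (but don't send) the form: one plain field plus one file part.
        using var form = new MultipartFormDataContent();
        form.Add(new StringContent("report-2020.pdf"), "fileName");
        form.Add(new ByteArrayContent(new byte[] { 0x25, 0x50, 0x44, 0x46 }), "file", "report.pdf");

        // The container picks the multipart content type (with a boundary) automatically.
        Console.WriteLine(form.Headers.ContentType.MediaType);
        // → multipart/form-data
    }
}
```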

Hope you enjoyed this post. You can have a look at the code on my GitHub:

https://github.com/mikuam/PrimeHotel


Set up a SQL Server in a docker container

You might wonder: why would I need to create a docker image with SQL Server? If I were to set up the infrastructure for a test or production environment, I would set up a SQL Server in Azure. To automate this process I would follow the infrastructure-as-code pattern, creating a Terraform script and a deployment pipeline for it. But that is a different case.

Scenarios where that can be useful:

  • you spend 3 days setting up a database to work with your app and you would like to have a back-up of this state
  • you are a front-end Mac user, that doesn’t really like to configure something with a Microsoft label 😉
  • you want to share a particular MS SQL database state with all developers, regardless of their local setup
  • you need that DB just for a quick task and pulling that docker image will take only a few minutes

What do you need

Please mind the fact that I’m a Windows user.

  • a docker implementation – I’m using Docker Desktop
  • Azure Data Studio, or any other tool, to connect to MS SQL Server
  • Docker Hub account, or any other docker repository

Creating a docker container

First of all, let’s check if Docker is installed on your machine. Just type `docker --version` and you should see something similar to this.

Now let’s pull a SQL Server docker image from Microsoft, this one is Server 2017 developer edition:

docker pull mcr.microsoft.com/mssql/server:2017-latest

Now let’s check what docker images I have on my machine.

I already had a SQL Server docker image, so I didn’t have to download it for the second time.

Now let’s create and run a docker container with the command:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=myPass123" -p 1433:1433 --name primeHotelDb -d mcr.microsoft.com/mssql/server:2017-latest

A few comments on what is happening here:

  • we are creating a docker container with the name primeHotelDb
  • we are using a mcr.microsoft.com/mssql/server:2017-latest docker image
  • we are passing ACCEPT_EULA and SA_PASSWORD parameters
  • this will set up a password myPass123 for the SQL Server for user sa
  • SQL Server will be available on localhost on port 1433, which is the first of the two ports passed in
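
Putting those bullet points together, a client connects with settings like these (a sketch – the values come straight from the docker run command above, and master is just the default database):

    Server=localhost,1433;Database=master;User Id=sa;Password=myPass123;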

Now, let’s check that we can connect to this SQL Server. I’m using Azure Data Studio, which is a handy tool and, I must admit, much quicker than SQL Server Management Studio.

The connection was successful. Let’s now make some changes and save them, so that we will know later on that the state of this container was persisted.

As you can see, I created an empty HotelDB database.

Preserving the container state

The easiest way to preserve the state of a container is to send it to a docker repository. I chose Docker Hub because it comes with the Docker Desktop installation and it’s free for community use.

But first, we need to:

  • commit the changes made in our container to a new docker image
  • name our image michalbialecki/prime-hotel-db

This is because my Docker Hub account is michalbialecki and it will accept repositories only in that format. This can be done with a single command:

docker commit primeHotelDb michalbialecki/prime-hotel-db

Now let’s check what my docker images look like.

Apart from Microsoft’s original image, there is my new image named michalbialecki/prime-hotel-db.

Let’s send this image to Docker Hub. To do that we need to log in.

docker login --username=michalbialecki

And after providing a valid password you should see something like this:

The next thing we need to do is to push the image to Docker Hub.

docker push michalbialecki/prime-hotel-db

And our docker image will be pushed to the Docker Hub repository.

After the process finishes, I can go to the browser and check my Docker Hub account.

Yay! There is my image! In Docker Hub I can add collaborators, so they will be able to push images to my repository as well.

Now let’s check that it really works. Let’s remove the hotel container and image from my machine.

docker stop primeHotelDb
docker rm primeHotelDb

Now if you list your containers, there should be no primeHotelDb (docker ps -a).

The next command will remove the docker image.

docker rmi michalbialecki/prime-hotel-db

You can check, that this image was removed (docker images).

The next thing to do is to pull a docker image from Docker Hub.

docker pull michalbialecki/prime-hotel-db

Now let’s run a new container from that image, under the name primeHotelDbV2.

docker run -p 1433:1433 --name primeHotelDbV2 michalbialecki/prime-hotel-db

Then if I connect to the container with Azure Data Studio, I will see that my HotelDB database exists. It means that the changes done to my container were persisted!

Summary

In this article, we learned how to:

  • set up a docker image with SQL Server inside
  • connect to an instance of SQL Server in a container
  • save your local changes to a new docker image
  • push and share a docker image with Docker Hub

As you could see, docker is nothing to be afraid of. It is a great way to host apps regardless of the operating system. It is also a great way to work with SQL Server without the need to install and configure everything yourself.

Hope you found that useful, cheers!


OData as a flexible data feed for React search

In this post, I’d like to show you a scenario where OData makes perfect sense. This will be a React application with a .Net Core 3.1 back-end, with just a single endpoint serving filtered results.

What is OData

OData is a convention for building RESTful interfaces that defines how resources should be exposed and handled. In this URL-based convention you can not only fetch data but also:

  • filter by one or more properties
  • select only those properties, that you need
  • fetch nested properties
  • take only top X entities
  • and more

What could those URLs look like?

  • Order products by rating
https://services.odata.org/OData/OData.svc/Products?$orderby=Rating asc
  • Getting the second page of 30 products
https://services.odata.org/OData/OData.svc/Products?$skip=30&$top=30
  • Selecting only Price and Name of a product
https://services.odata.org/OData/OData.svc/Products?$select=Price,Name

More examples can be found here (although it is an older version of the convention) – https://www.odata.org/documentation/odata-version-2-0/uri-conventions/
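
The options can also be combined in a single request. An illustrative URL against the same demo service, built from the query options shown above:

    https://services.odata.org/OData/OData.svc/Products?$filter=Price lt 20&$orderby=Rating desc&$top=10&$select=Name,Price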

Create a project from a template

I created an application from a template from Visual Studio. This is a Web Api with React on the front-end.

It will work beautifully, right after you run it. You should see something like this:

This is the page that we will modify later.

I added Entity Framework Core with these NuGet packages:

  • Microsoft.EntityFrameworkCore
  • Microsoft.EntityFrameworkCore.Design
  • Microsoft.EntityFrameworkCore.SqlServer

Then created an aspnetcoreContext:

public partial class aspnetcoreContext : DbContext
{
    public aspnetcoreContext(DbContextOptions<aspnetcoreContext> options)
        : base(options)
    {
    }

    public virtual DbSet<Profile> Profiles { get; set; }
}

And the Profile class:

public partial class Profile
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
    public string Country { get; set; }
    public string PhoneNumber { get; set; }
    public string Website { get; set; }
    public string CompanyName { get; set; }
    public string Notes { get; set; }
}

Then with command:

dotnet ef migrations add InitMigration

I generated an EF Core migration to add a Profiles table to my database. The last missing piece is to run a database upgrade at program startup to execute those migrations. So in the Startup class I added:

private void UpgradeDatabase(IApplicationBuilder app)
{
    using (var serviceScope = app.ApplicationServices.CreateScope())
    {
        var context = serviceScope.ServiceProvider.GetService<aspnetcoreContext>();
        if (context != null && context.Database != null)
        {
            context.Database.Migrate();
        }
    }
}

And as the last instruction in the Configure method, I added:

UpgradeDatabase(app);

Simple, right? With very little work we created a Profiles table using the EF Core migrations mechanism. This way creating the DB is part of program startup, so apart from providing a connection string, there is no need to do anything else to start this app.
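
For completeness, that connection string lives in appsettings.json. A sketch – the LocalDB key matches the name used in the Startup configuration, while the server and database values are just placeholders:

```json
{
  "ConnectionStrings": {
    "LocalDB": "Server=localhost,1433;Database=HotelDb;User Id=sa;Password=myPass123;"
  }
}
```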

You can check the full project in this GitHub repo.

Let’s start with building an OData endpoint

There are only a few lines that we need to add to the Startup class to make OData work.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews(mvcOptions =>
        mvcOptions.EnableEndpointRouting = false);

    services.AddOData();

    // Entity Framework
    services.AddDbContext<aspnetcoreContext>(options =>
          options.UseSqlServer(Configuration.GetConnectionString("LocalDB")));
}

In the Configure method add:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseMvc(routeBuilder =>
    {
        routeBuilder.Select().Expand().Filter().OrderBy().MaxTop(1000).Count();
        routeBuilder.MapODataServiceRoute("odata", "odata", GetEdmModel());
        routeBuilder.EnableDependencyInjection();
    });
}

This is all you need to configure OData, now let’s create a controller.

public class ProfilesController : ControllerBase
{
    private readonly aspnetcoreContext _localDbContext;

    public ProfilesController(aspnetcoreContext localDbContext)
    {
        _localDbContext = localDbContext;
    }

    [HttpGet]
    [EnableQuery()]
    public IQueryable<Profile> Get()
    {
        return _localDbContext.Profiles.AsNoTracking().AsQueryable();
    }
}

And that’s it. Now when you run your app and go to this url:

https://localhost:44310/odata

You will see the OData collection available.

And when you go to profiles and check top 50, you will see something like this:

Front-end side

The React application is located in the ClientApp directory, and what we need to change is in the FetchData.js file.

I’m not an expert in front-end, but I managed to rewrite this part to hooks and include very simple logic to fetch data from the OData endpoint.

You can check the full project in this GitHub repo.

The result

Check out how it works in this short movie. Notice how fast it is with around 500,000+ profiles.

You probably noticed what kind of URLs the front-end fetches. Let’s check this one, for example:

https://localhost:44310/odata/profiles?$top=25&$filter=contains(FirstName,%27john%27)%20And%20contains(LastName,%27art%27)

Now, let’s run SQL Profiler and check what is called on the DB side:

Notice that I didn’t have to write this SQL, it was all generated for me.

The beauty of OData

This example showed how easy it is to expose data with OData. It pairs perfectly with Entity Framework Core and generates SQL for you. With a simple URL convention, you get huge possibilities for filtering and shaping the data you receive.

From the functional point of view, OData is:

  • very flexible, perfect for forms
  • useful where an API has many clients that each want output in a different shape
  • a full RESTful experience

I hope you like this post. All code can be found on my GitHub repo.

Enjoy!


Entity Framework Core health check

Health checks are important – both of ourselves, but also of ourrrrr micro-services. This is something I came across lately – a health check of your connection to a database via an EF Core context. Let’s check this out!

To add a health check for EF Core you need a project:

  • WebAPI with .Net Core, I’m using 3.1
  • with Entity Framework Core and some DbContext

First, install the NuGet package:

Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore

Now go to the Startup.cs class and add this code in the ConfigureServices method:

services
   .AddHealthChecks()
   .AddDbContextCheck<aspnetcoreContext>();

In the Configure method in the same class add:

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health");
});
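
By default this endpoint returns a plain-text status. If you’d rather return JSON, MapHealthChecks accepts a HealthCheckOptions with a custom ResponseWriter – a sketch, where the payload shape is my own choice, not a framework default:

```csharp
app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health", new HealthCheckOptions
    {
        // Serialize the overall status (Healthy/Degraded/Unhealthy) as JSON.
        ResponseWriter = async (context, report) =>
        {
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsync(
                JsonSerializer.Serialize(new { status = report.Status.ToString() }));
        }
    });
});
```

HealthCheckOptions comes from the Microsoft.AspNetCore.Diagnostics.HealthChecks namespace and JsonSerializer from System.Text.Json.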

Now run the service and go to the /health endpoint; you should see:

Notice that I didn’t have to add any Controller class to make it work, it works out of the box.

But does it really check the database status? Let’s break the connection string and see what the result will be:

So what does it do underneath? Let’s check in SQL Server Profiler whether it connects to the DB.

So it really does call the DB in this check – awesome.

Simple and effective – that’s how code should look. You can find the full code on my GitHub: https://github.com/mikuam/MichalBialecki.com.OData.Search

Hope you enjoyed this, cheers!


12 things you need to know about .Net Core

.Net Core is an exciting framework to work with, and if you’re wondering what all this fuss is about, I’ll explain everything in just 12 statements.

Let’s not wait anymore and start!

1. .Net Core is a completely new framework

.Net Framework and .Net Core are completely separate frameworks. But why did Microsoft decide to create something from scratch? Let’s see the drawbacks of .Net Framework:

  • it works only on Windows
  • it’s big, not modular
  • it’s not open-source
  • hosting ASP.Net pages is slow

In contrast, a new framework was created – .Net Core. It runs on Windows, Linux and MacOS, is open-source, and is designed with web request handling in mind.

2. It’s fast

If we take all the hosting frameworks, .Net Core will be in the top 3, which is already impressive. However, if we look only at popular languages and frameworks, ASP.Net Core is well ahead of the competition. Just take a look at popular hosting frameworks like netty for Java, nginx for C or nodejs for JavaScript. This may be one of the biggest reasons to switch to .Net Core.

3. Open-source

.Net Core is an open-source project, which means that its sources are available online. Take a look at the page of the .Net Platform on GitHub.

Over 190 repositories within this platform! But what does it mean that it’s open-source?

  • if you’re wondering how the framework method is written or what it does underneath, you can just go and have a look
  • developing .Net Core is no longer only in the hands of Microsoft employees, but also in the hands of the whole community around .Net Core. This is why the development of .Net Core is so fast and new features are coming almost every month
  • if you found a bug, you can raise an issue on GitHub or create a pull-request. You can become a framework contributor, how cool is that!

4. Simpler and more compact

If you take a look at the project structure, you will immediately see the difference between frameworks. Here is a project structure of .Net Core on top and .Net Framework underneath.

Those projects were created from a Visual Studio template. Notice that .Net Core needs just one package, whereas .Net Framework needs a whole bunch of assemblies to start working.

Now let’s have a look at the project files.

The .Net Framework project file, on the left, contains a lot of configuration and settings, whereas the .Net Core one doesn’t need anything like that. It’s much easier to read and to work with. What’s more, as our project gets bigger, this file will not grow as much as in .Net Framework, so it will stay very readable and great to work with.

5. Multiplatform

This is what I already mentioned, but I wanted to emphasise it even more. .Net Core is a multi-platform framework and this means:

  • we can run .Net Core project on Windows, Linux, and MacOS
  • we can develop .Net Core projects on above systems with Visual Studio or Visual Studio Code
  • we don’t need IIS to host ASP.Net Core pages
  • we can host .Net Core on many servers, like:
    • IIS
    • Azure app service
    • Docker
    • Ngnix
    • Apache

There is also the very handy .Net Core CLI – a command-line interface that works on multiple OSs – that we can use to:

  • build a project
  • run tests
  • publish a project

6. Many things are supported out of the box

.Net Core offers quite a lot without installing many additional packages. It was designed with fast request processing in mind, so it’s ideal for creating Web APIs. As a result, writing a Web API from scratch is trivial, as is adding many of the features that you need:

  • Dependency Injection
  • JSON support
  • logging
  • easy configuration binding
  • Polly – retry mechanism

7. Modularity

.Net Framework is a big framework. In software development, changing, testing and releasing big projects is hard and requires a lot of time. This is why .Net Framework wasn’t released frequently and wasn’t changing very rapidly. .Net Core, however, is modular and consists of more than 190 repositories. What does that mean? Theoretically:

  • every module can be worked on separately
  • every module can be released separately
  • work can be better distributed and done more in parallel

This has proven right, as .Net Core is developing rapidly, especially when you look at the integration with the Azure cloud.

Another advantage of having a modular framework is that you only reference the packages that you need. Because of that your app will be lighter and quicker to publish.

8. There are windows!

Yes! And this is a surprise for me – .Net Core from version 3.0 has a Windows Forms implementation. This package contains the most popular elements of WinForms as I know it from the old days. This is very exciting, because you can benefit from all the goodies of .Net Core (performance, modularity), but the looks stay the same. Sadly, there are no plans to make Windows Forms multi-platform, so you won’t be able to run it on Linux or MacOS.

9. .Net Core 3.1, what’s next?

Take a look at this graph:

On the top there is a line representing the development of .Net Framework and on the bottom one that represents the development of Mono. FYI, Mono is a .Net Framework implementation that runs on Linux but doesn’t support all of its features. In the middle, there is .Net Core development.

.Net Core will contain all of the best features of .Net Framework and Mono and will build on top of that. What’s more, Microsoft says that .Net Core is the future and it is the framework that they will support. You may have noticed that the next version of .Net Core will be called .Net 5. This is because version 4 could confuse developers.

When is it coming? Soon – around November 2020. What’s new? .Net 5 will support IoT, mobile phones and gaming (Xbox). This means that a library written once can be used in all of those cases, and I’m sure that the development of new features will only accelerate.

10. You don’t need to rewrite everything

If you’re thinking of trying out .Net Core while still using .Net Framework, don’t worry. .Net Core and .Net Framework can’t work together directly, but you can build your library targeting .Net Standard. .Net Framework 4.6.2 can reference .Net Standard 2.0 libraries, and higher versions of .Net Framework support higher versions of .Net Standard. All supported versions are listed here, at the bottom: https://dotnet.microsoft.com/platform/dotnet-standard.

I personally work in a solution, where we have multiple older projects in .Net Framework, but a few ones in .Net Standard as well, working together.

.Net Standard is a subset of the libraries of .Net Core, so you have most of the features available, but you can write code that supports older frameworks.
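
For example, a minimal class-library project file targeting .Net Standard 2.0 – consumable from both .Net Framework 4.6.2+ and .Net Core – looks like this:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

</Project>
```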

Important note – porting existing projects to .Net Core can be trivial or not possible at all. The problem might be with legacy libraries that were written for .Net Framework and will not work on .Net Core. In those cases, you would need to rewrite your project to use different libraries and that can take time.

Many things in .Net Core are rewritten or written anew, so methods and interfaces might differ. However, most changes are in configuration, dependency injection, and handling of the request pipeline.

11. Should I learn it?

Yes! There are many reasons, but let’s look at the most important ones:

  • Microsoft says it’s the future and will be maintained and developed
  • It’s easy to work with, multi-platform and open source
  • It’s fast!

12. Stay informed

One last thing, as an addition. If you like this post and you’d like to stay informed about .Net Core, let’s stay in contact:

You can also learn .Net Core 3 with me on Udemy. Here is a link to the best possible offer. I’m sure you won’t get disappointed: https://www.udemy.com/course/microsoft-aspnet-core-30/?referralCode=8CD54D26BD8929CC27EB 

Thanks for reading, cheers!

Pimp your repo with GitHub Actions!

Do you have a GitHub account with a repository? Improve it with GitHub Actions! GitHub Actions lets you build your own workflows triggered by all kinds of events from your repositories. If you go and check the GitHub Actions website, it looks very promising.

Let’s start with a build

To start working with GitHub Actions, just go to the Actions tab on your repository page.

As my repo is built in .Net Core, I can choose the template that GitHub suggests. After that, we can edit the yml file to set up our first workflow. Let’s check what it looks like:

name: .NET Core

on: 
    push:
        branches:
            - github-actions

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v1
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.100
    - name: Build with dotnet
      run: dotnet build --configuration Release

What to notice:

  • “on” tells us which repository event will trigger our flow. Mine will be triggered by a push to the branch “github-actions”
  • “jobs” – each of those will be visible as a separate big step
  • “build” – a job name; a job can have multiple small steps
  • “runs-on” identifies which operating system the workflow will run on. You can choose from ubuntu-latest, macos-latest and windows-latest, and because .Net Core can be built on Linux, I chose ubuntu-latest
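
As a side note, the “on” section isn’t limited to pushes. An example (my own extension, not from the original workflow) that also triggers the flow for pull requests targeting the same branch:

```yml
on:
  push:
    branches:
      - github-actions
  pull_request:
    branches:
      - github-actions
```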

After a few tries to get it right, the result looked like this:

Let’s run unit tests

My repository has a project dedicated to unit tests, so I’d like to run it and check if all tests are passing. In order to achieve that, I just need to add a few lines to my yml file.

name: .NET Core

on: 
    push:
        branches:
            - github-actions

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v1
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.100
    - name: Build with dotnet
      run: dotnet build --configuration Release
    - name: Run unit tests
      run: dotnet test --no-build --configuration Release

After committing that file, my workflow ran instantly and it took under a minute to see the results.

What about code coverage?

One of the cool things we can use are actions provided by other users. In order to check our project’s code coverage we need to do some things on our side, but to expose the result we can integrate with Coveralls.io. Let’s go step by step.

The first thing we need to do is install the coverlet.msbuild NuGet package in our test projects. This will enable us to generate a code coverage file in lcov format.

For code coverage, I created a separate flow.

name: Code coverage

on: 
    push:
        branches:
            - github-actions

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v1
      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
           dotnet-version: 3.0.100
      - name: Generate coverage report
        run: |
          cd ./Tests/TicketStore.Tests/
          dotnet test /p:CollectCoverage=true /p:CoverletOutput=TestResults/ /p:CoverletOutputFormat=lcov
      - name: Publish coverage report to coveralls.io
        uses: coverallsapp/github-action@v1.0.1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          path-to-lcov: ./Tests/TicketStore.Tests/TestResults/coverage.info   

This is a slightly more complicated case, where we run “dotnet test” with additional parameters to output code coverage. Then we use the coverallsapp GitHub Action to integrate with Coveralls.io. To keep it secure, we use the existing GitHub token. The result can be checked on the Coveralls page.

Let's add a cherry on top. Coveralls.io generates a badge that we can use in our Readme.md file. If I add a link like this at the top:

![Build Status](https://github.com/mikuam/TicketStore/workflows/.NET%20Core/badge.svg?branch=github-actions) [![Coverage Status](https://coveralls.io/repos/github/mikuam/TicketStore/badge.svg?branch=github-actions)](https://coveralls.io/github/mikuam/TicketStore?branch=github-actions) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/mikuam/TicketStore/blob/master/LICENSE)

Then I can finally see my badges.

Go check for yourself: https://github.com/mikuam/TicketStore

Why is it so exciting?

I found GitHub Actions very exciting, because:

  • it's free! I host many of my pet projects on GitHub and it has never cost me a dime, and now I can configure a CI/CD process for free as well
  • it supports many languages and frameworks. Even .Net Framework projects that do not run on .Net Core I can build on Windows and set up the whole process. I can even run PowerShell scripts
  • I can do more and automate things like deploying to Azure or creating a NuGet package

An important thing that I noticed is that it didn't work with projects targeting .Net Core 3.1, but when I switched the projects to .Net Core 3.0 it worked fine.

I really enjoyed playing around with GitHub Actions and felt a bit like a DevOps guy 🙂 I'm not very experienced in building infrastructure like that, but it was very simple and intuitive. And those badges… now my repository looks professional! 🙂

ASP.Net Core 3 – configuration

In this chapter, we will cover how we can use configuration in ASP.Net Core 3. But before diving in, let's see for a moment how it looked in plain old ASP.Net.

Configuration before .Net Core

In old ASP.Net, configuration was typically handled in only one XML file – Web.config. It was the place where everything was kept, from connection strings to assembly versions and detailed framework settings. This file got bigger and bigger as the project grew and became hard to read. Luckily, you could use separate files if you linked them in the Web.config. Here is what the most important part of this file looked like:
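A trimmed sketch of such a file (the section names follow real Web.config conventions, the values are made up):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="DB" connectionString="Server=.;Database=TicketStore;Trusted_Connection=True;" />
  </connectionStrings>
  <appSettings>
    <add key="smtpServerAddress" value="smtp.example.com" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.7.2" />
  </system.web>
</configuration>
```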

The XML format was perfectly readable, but you needed to follow a specific convention. Let's now see how things changed in .Net Core.

New possibilities

In .Net Core 3 things are completely different:

  • configuration can be stored in many files, and the most common format is JSON
  • we can follow our own format
  • we can have nesting
  • configuration can be parsed to whole classes with nested objects
  • we can make configuration refreshable when an application is running, without the need to restart it

Instead of having the configuration in an XML format, in .Net Core we have many more possibilities. Here are the sources that we can use (supported by default by the framework):

  • Azure Key Vault
  • Azure App Configuration
  • Command-line arguments
  • Directory files (INI, JSON, XML)
  • Environment variables (by default prefixed by DOTNET_)
  • In-memory .Net objects
  • Settings files
  • and custom providers

Note that variables can be overridden when another source provides the same variable, so the order in which we apply configuration sources is important.

One more thing – we can have different configurations per environment. In order to achieve that, we can suffix our configuration files with environment names. The framework sets the environment name to one of: development, staging, production. So we can name our files like this:

  • appsettings.json – that would contain development variables
  • appsettings.production.json – that would contain production variables

Let’s use a configuration in .Net Core 3

First, let’s have an example of an appsettings.json configuration file:
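The original post shows this file as an image; a file matching the configuration classes below could look like this (the values are examples):

```json
{
  "ConnectionStrings": {
    "DB": "Server=.;Database=TicketStore;Trusted_Connection=True;"
  },
  "Email": {
    "SmtpServerAddress": "smtp.example.com",
    "SmtpServerPort": 25,
    "SenderAddress": "no-reply@example.com"
  }
}
```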

In this configuration file, I have an Email section with the configuration needed to send e-mails. In order to fetch the whole configuration at once, I'll add three classes:

    public class ServiceConfiguration
    {
        public ConnectionStringsConfiguration ConnectionStrings { get; set; }

        public EmailConfiguration Email { get; set; }
    }

    public class EmailConfiguration
    {
        public string SmtpServerAddress { get; set; }

        public int SmtpServerPort { get; set; }

        public string SenderAddress { get; set; }
    }

    public class ConnectionStringsConfiguration
    {
        public string DB { get; set; }
    }

ServiceConfiguration represents the whole configuration and EmailConfiguration represents the Email section.

Let's now go to my Startup class. IConfiguration is an interface for handling configuration, provided by the framework and registered in DI by default. In ConfigureServices we can bind the configuration and register it in the Dependency Injection container.

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // configuration
        var serviceConfiguration = new ServiceConfiguration();
        Configuration.Bind(serviceConfiguration);
        services.AddSingleton(serviceConfiguration);
    }

Notice that we need just those 3 lines to make it work. Now let's see how we can implement a class that sends e-mails.

    public class EmailSenderService : IEmailSenderService
    {
        private readonly EmailConfiguration emailConfiguration;
        private readonly SmtpClient _client;

        public EmailSenderService(ServiceConfiguration configuration)
        {
            emailConfiguration = configuration.Email;
            _client = new SmtpClient(emailConfiguration.SmtpServerAddress, emailConfiguration.SmtpServerPort);
        }

        public async Task SendEmail(string emailAddress, string content)
        {
            var message = new MailMessage(emailConfiguration.SenderAddress, emailAddress)
            {
                Subject = content
            };

            await _client.SendMailAsync(message);
        }
    }

This is programming pleasure in its purest form. Notice that we are injecting ServiceConfiguration, which is our representation of the configuration in code. We do not need to use or parse JSON files, or dig for nested variables. We just fetch the configuration the way we defined it – simple.

What you don’t need to do

In many tutorials and even in the official Microsoft documentation you could see that in order to read the appsettings.json file, you would need to make this change in the Program.cs file:

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                config.AddJsonFile("appsettings.json");
            })
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });

The truth is that the appsettings.json file will be read by the framework by default, without using ConfigureAppConfiguration.

This can cause problems when deploying to Azure. When I was doing it for the first time, it took me hours to figure that out. I set up variables in the App Service configuration, but they were overridden by the local appsettings.json file, because that configuration was applied after reading the configuration from App Service. You can safely remove this line.

 

Hope you enjoyed this post, you can have a look at the code posted here on my Github:

https://github.com/mikuam/TicketStore

 

ASP.Net Core 3 – Dependency Injection

Dependency Injection is a fundamental concept in computer programming, successfully implemented in many programming languages. What makes it so useful, and how does .Net Core 3 support it?

Let’s start with the definition.

Dependency Injection is a software design pattern where dependencies are not created by the client, but rather passed to the client.

In common usage, instead of creating dependencies with the new keyword, we define what we need and delegate the responsibility of providing it to the injector. A class does not need to know how to create its dependencies; it's not a part of its logic.

With the separation of creation and behavior of our service, we can build loosely coupled services. In our classes, we concentrate on how they are going to behave.

This concept, Dependency Injection, is part of a broader concept – Inversion of Control. Dependency Injection, DI for short, follows two SOLID principles: the dependency inversion principle and the single responsibility principle. It is crucial for creating well-designed and well-decoupled software; it's just a must-have.

Why is creating dependencies on your own a bad idea?

    var ratingsProvider = new MovieRatingProvider(new MoviesClient("connection"), 3);
  • to change the implementation of a dependency (in this case MovieRatingProvider), we need to change every place where it is used
  • to create a dependency, we need to create all of its dependencies as well (MoviesClient)
  • it's hard to write unit tests – dependencies should be easily mocked. In tests we do not want to create a MoviesClient; we want to mock it and test MovieRatingProvider
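To see why the last point matters, here is a minimal sketch (the names RateMovie and getRatings are hypothetical) where the rating dependency sits behind an abstraction, so a test can swap in a canned implementation instead of a real client:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// the dependency as a delegate: production code would pass a real
// implementation, a test passes this canned one
Func<IEnumerable<string>, IDictionary<string, decimal>> getRatings =
    titles => titles.ToDictionary(t => t, _ => 4.5m);

// consumer code only knows the abstraction, not how ratings are produced
decimal RateMovie(string title) => getRatings(new[] { title })[title];

Console.WriteLine(RateMovie("Dune") == 4.5m); // True
```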

Did you know? There is a way of programming where you are not allowed to use the new keyword outside of a dedicated factory. You not only delegate creating dependencies to the DI container, but also create factories for all other objects. It's a good concept, but it doesn't always make sense.

Practical example

Let's say we have a Web API for getting events. We have EventsController, which gets events from EventProvider, which in turn gets movie ratings from MovieRatingProvider. On the schema it will look like this:

Now let's see the code. EventsController looks like this:

    [Route("[controller]")]
    [ApiController]
    public class EventsController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            try
            {
                var provider = new EventProvider();

                return new JsonResult(provider.GetActiveEvents());
            }
            catch (Exception)
            {
                // logging
                return StatusCode(StatusCodes.Status500InternalServerError);
            }
        }
    }

You can see that EventProvider, a dependency, is created with the new keyword. Here is how it looks:

    public class EventProvider
    {
        public IEnumerable<Event> GetActiveEvents()
        {
            var events =  GetAllEvents();

            return ApplyRatings(events);
        }

        private IEnumerable<Event> ApplyRatings(IEnumerable<Event> events)
        {
            var ratingsProvider = new MovieRatingProvider();
            var movieRatings = ratingsProvider.GetMovieRatings(
                events.Where(e => e.Type == EventType.Movie)
                      .Select(m => m.Title));

            foreach (var rating in movieRatings)
            {
                var eventToRate = events.FirstOrDefault(e => e.Title == rating.Key);
                if (eventToRate != null)
                {
                    eventToRate.Rating = rating.Value;
                }
            }

            return events;
        }

        private static IEnumerable<Event> GetAllEvents()
        {
            // some list here
            return Enumerable.Empty<Event>();
        }
    }

EventProvider is a bit more complicated. It gets all events and then, for movies, it searches for movie ratings and tries to apply them. The last dependency, MovieRatingProvider, looks like this:

    public class MovieRatingProvider
    {
        public IDictionary<string, decimal> GetMovieRatings(IEnumerable<string> movieTitles)
        {
            var random = new Random();

            var ratings = movieTitles
                .Distinct()
                .Select(title => new KeyValuePair<string, decimal>(title, (decimal)random.Next(10, 50) / 10));

            return new Dictionary<string, decimal> (ratings);
        }
    }

The first step

What should be the first step to introduce Dependency Injection? Interfaces! We need to introduce interfaces for all our dependencies:

    public interface IEventProvider
    {
        IEnumerable<Event> GetActiveEvents();
    }

    public interface IMovieRatingProvider
    {
        IDictionary<string, decimal> GetMovieRatings(IEnumerable<string> movieTitles);
    }

And we need to decorate our classes with it:

public class EventProvider : IEventProvider

public class MovieRatingProvider : IMovieRatingProvider

The second step

We need to use our interfaces, instead of concrete classes. How do we do that? We need to pass an interface to our class.

There are two popular ways:

  • constructor injection
  • property injection

We will use the first one, which I think is better, because the constructor is the one place that gathers all dependencies together. Let's see how it looks in EventsController:

    [Route("[controller]")]
    [ApiController]
    public class EventsController : ControllerBase
    {
        private readonly IEventProvider _eventProvider;

        public EventsController(IEventProvider eventProvider)
        {
            _eventProvider = eventProvider;
        }

        [HttpGet]
        public IActionResult Get()
        {
            try
            {
                return new JsonResult(_eventProvider.GetActiveEvents());
            }
            catch (Exception)
            {
                // logging
                return StatusCode(StatusCodes.Status500InternalServerError);
            }
        }
    }

We are passing IEventProvider in the constructor and saving it in a private field. It will not be available outside of the class, but you will be able to use the same instance of EventProvider in every method of your class, brilliant!

Now let’s look at the EventProvider:

    public class EventProvider : IEventProvider
    {
        private readonly IMovieRatingProvider _movieRatingProvider;

        public EventProvider(IMovieRatingProvider movieRatingProvider)
        {
            _movieRatingProvider = movieRatingProvider;
        }

        public IEnumerable<Event> GetActiveEvents()
        {
            var events =  GetAllEvents();

            return ApplyRatings(events);
        }

        private IEnumerable<Event> ApplyRatings(IEnumerable<Event> events)
        {
            var movieRatings = _movieRatingProvider.GetMovieRatings(
                events.Where(e => e.Type == EventType.Movie)
                      .Select(m => m.Title));

            foreach (var rating in movieRatings)
            {
                var eventToRate = events.FirstOrDefault(e => e.Title == rating.Key);
                if (eventToRate != null)
                {
                    eventToRate.Rating = rating.Value;
                }
            }

            return events;
        }

        private static IEnumerable<Event> GetAllEvents()
        {
            // some list here
            return Enumerable.Empty<Event>();
        }
    }

An implementation of IMovieRatingProvider is also passed with constructor injection and saved in a private field. All is ready for…

The Final step

In .Net Core 3, support for Dependency Injection is built into the framework, so you don't need to do much to make it work. All you need to do is go to the Startup class and, in the ConfigureServices method, add registrations of your services.

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        services.AddScoped<IMovieRatingProvider, MovieRatingProvider>();
        services.AddScoped<IEventProvider, EventProvider>();
    }

I added services.AddScoped calls that bind an interface to the class that implements it. This is how the framework knows which instance of a class to pass when you define your dependency with an interface. Simple as that, those were all the changes needed to make it work. Let's see the new application schema side by side with the old one:

Single point of configuration

The Startup class and its ConfigureServices method is the only place where we need to put configuration for the whole application. Even if you are using multiple projects, you will need to configure DI only once, in the Startup class. This applies to a single executable project, like a Web API or an ASP.Net website. If you have two such projects in your solution, you would need to configure DI in both of them.

Service lifetimes

You probably noticed that I used AddScoped method to register my dependencies. This is one of the three service lifetimes that you can use:

  • Transient (AddTransient)
  • Scoped (AddScoped)
  • Singleton (AddSingleton)

Those service lifetimes are pretty standard for any Dependency Injection container. Let’s have a quick look at what they are for.

Scoped lifetime – the most popular one. If you register your services as scoped, each will be created only once per request. It means that whenever you use the same interface representing your dependency, the same instance will be returned within one request. Let's have a look at this simple example:

Here, when ProductController is called, it depends on ProductService and OrderService, which also depends on ProductService. In this case the .Net Core 3 DI will resolve the ProductService dependency twice, but it will create ProductService once and return the same object in both places. This happens with the scoped lifetime, because it is the same request. In most cases, you should be using this one.
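The scoped behavior can be mimicked by hand with a tiny per-request cache. This is a simplified sketch, not the real container code:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// one cache per "request": the first resolution creates the instance,
// every following resolution returns the same object
var scopeCache = new Dictionary<Type, object>();

object Resolve(Type type) =>
    scopeCache.TryGetValue(type, out var existing)
        ? existing
        : scopeCache[type] = Activator.CreateInstance(type);

var first = Resolve(typeof(StringBuilder));
var second = Resolve(typeof(StringBuilder));
Console.WriteLine(ReferenceEquals(first, second)); // True
```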

Transient lifetime – if you register your service with a transient lifetime, you will get a new object whenever you fetch it as a dependency, no matter if it is a new request or the same one.

Singleton lifetime – this is the trickiest one. Singleton is a very well known design pattern, in which whenever you use an object, you get the same instance of it, every time, even in a different request or a different thread. This is an invitation to problems, and it is also why singleton is sometimes called an antipattern. With the singleton lifetime you will get the same instance of your object every time, for the whole lifetime of your application. It's not a bad idea to use a singleton, but you need to implement it in a thread-safe manner. It may be useful whenever the creation of an object is expensive (time- or resource-wise) and you would rather keep it in memory for the next usage than create it every time. For example, you can use a singleton to create a service that sends e-mails. Creating an SmtpClient is expensive and can be done only once.

    public class EmailSenderService : IEmailSenderService
    {
        private readonly IConfiguration _configuration;

        private readonly SmtpClient _client;

        public EmailSenderService(IConfiguration configuration)
        {
            _configuration = configuration;

            var smtpServerAddress = _configuration.GetValue<string>("Email:smtpServerAddress");
            var smtpServerPort = _configuration.GetValue<int>("Email:smtpServerPort");
            _client = new SmtpClient(smtpServerAddress, smtpServerPort);
        }

        public async Task SendEmail(string emailAddress, string content)
        {
            var fromAddress = _configuration.GetValue<string>("Email:senderAddress");
            var message = new MailMessage(fromAddress, emailAddress);
            message.Subject = content;

            await _client.SendMailAsync(message);
        }
    }

And in Startup class:

services.AddSingleton<IEmailSenderService, EmailSenderService>();
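One simple way to keep lazily created state thread-safe inside a singleton is Lazy&lt;T&gt;. A sketch, with a string standing in for the expensive SmtpClient:

```csharp
using System;

// Lazy<T> runs the factory at most once, in a thread-safe way,
// no matter how many threads ask for the value concurrently
int initializations = 0;
var client = new Lazy<string>(() =>
{
    initializations++;
    return "smtp-client"; // placeholder for an expensive SmtpClient
});

Console.WriteLine(client.Value); // factory runs here
Console.WriteLine(client.Value); // cached value, factory is skipped
Console.WriteLine(initializations); // 1
```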

Is built-in DI container enough?

This is an important question to ask. Microsoft did a great job in developing a Dependency Injection container, but there are several great solutions out there that are ready to use. Actually, Microsoft lists them on an official documentation page: https://docs.microsoft.com/en-gb/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-3.1#default-service-container-replacement

So what are popular DI containers you might try out?

  • Autofac
  • Lamar
  • Scrutor

And those are only a few; there are more. The important thing is how they differ from the built-in container. The obvious answer is that they offer more. So what doesn't Microsoft's built-in container offer?

  • property injection
  • custom lifetime management
  • lazy initialization
  • auto initialization based on name

I must admit that I miss the last one the most. In SimpleInjector for .Net Core 2.1, it was possible to register dependencies automatically, as long as they followed a naming convention with an interface having the same name as the implementing class, preceded by 'I'. There was no need to write registrations for 90% of cases.

So what would I use?

I would use the built-in container whenever I don't need any specific features. Without 3rd party NuGet packages the code is cleaner and easier to understand. .Net Core 3 does a pretty good job and you probably won't need anything else.

Hope you enjoyed this post, you can have a look at the code posted here on my Github:

https://github.com/mikuam/TicketStore

ASP.Net Core 3 – pass parameters to actions

Passing parameters to actions is an essential part of building a RESTful Web API. .Net Core offers multiple ways to pass parameters to the methods that represent your endpoints. Let's see what they are.

Pass parameter as a part of a URL

When passing a parameter in a URL, you need to define a routing that contains a parameter. Let's have a look at the example:

    [Route("{daysForward}")]
    [HttpGet]
    public IActionResult Get(int daysForward)
    {
        var rng = new Random();
        return new JsonResult(new WeatherForecast
        {
            Date = DateTime.Now.AddDays(daysForward),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });
    }

 

This method returns a WeatherForecast for a single day in the future. The daysForward parameter represents how many days in advance the weather forecast should be returned. Notice that daysForward is a part of the routing, so a valid URL to this endpoint will look like:

GET: weatherForecast/3

We can also use the [FromRoute] attribute before the variable type, but it will work the same way by default.

   public IActionResult Get([FromRoute] int daysForward)

Pass parameter in a query string

This is a very common method for passing additional parameters, because it does not require us to change the routing, so it is also backward compatible. That's important if we were to change an existing solution.

Let's look at a different method that returns a collection of weather forecasts with a sorting option.

    [HttpGet]
    public IEnumerable<WeatherForecast> Get([FromQuery]bool sortByTemperature = false)
    {
        var rng = new Random();
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });

        if (sortByTemperature)
        {
            forecasts = forecasts.OrderByDescending(f => f.TemperatureC);
        }

        return forecasts;
    }

In this example we pass an optional sortByTemperature parameter. Notice that we use the [FromQuery] attribute to indicate that it's a variable taken from the query string. A URL to this endpoint would look like this:

GET: weatherForecast?sortByTemperature=true

You can put many parameters like this:

GET: weatherForecast?key1=value1&key2=value2&key3=value3

Remember that a URL needs to be encoded properly to work right. If you were to pass a parameter like this:

https://www.michalbialecki.com/?name=Michał Białecki

It will need to be encoded into:

https://www.michalbialecki.com/?name=Micha%C5%82%20Bia%C5%82ecki
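This encoding can be done in C# with Uri.EscapeDataString:

```csharp
using System;

// percent-encode a query string value before putting it in a URL
var encoded = Uri.EscapeDataString("Michał Białecki");
Console.WriteLine(encoded); // Micha%C5%82%20Bia%C5%82ecki
```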

Pass parameters with headers

Passing parameters in request headers is less popular, but also widely used. A header doesn't show in the URL, so it's less noticeable by the user. A common scenario for passing parameters in a header would be providing credentials or a parent request id to enable multi-application tracking. Let's have a look at this example:

    [HttpPost]
    public IActionResult Post([FromHeader] string parentRequestId)
    {
        Console.WriteLine($"Got a header with parentRequestId: {parentRequestId}!");
        return new AcceptedResult();
    }

In order to send a POST request, we would need to use some kind of a tool, I’ll use Postman:

Here you see that I specified headers and parentRequestId is one of them.
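From code, the same header could be set on an HttpRequestMessage like this (the URL is a placeholder):

```csharp
using System;
using System.Linq;
using System.Net.Http;

// build a POST request carrying the parentRequestId header;
// sending it would be done with HttpClient.SendAsync
var request = new HttpRequestMessage(HttpMethod.Post, "https://localhost:5001/weatherForecast");
request.Headers.Add("parentRequestId", "12345");

Console.WriteLine(request.Headers.GetValues("parentRequestId").Single()); // 12345
```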

Pass parameters in a request body

The most common way to pass data is to include it in the request body. We can add a Content-Type header with the value application/json to inform the receiver how to interpret the body. Let's have a look at our example:

    [HttpPost]
    public IActionResult Post([FromBody] WeatherForecast forecast)
    {
        Console.WriteLine($"Got a forecast for data: {forecast.Date}!");
        return new AcceptedResult();
    }

We use the [FromBody] attribute to indicate that forecast will be taken from the request body. In .Net Core 3 we don't need any serializer to deserialize the JSON body to a WeatherForecast object; it will work automatically. To send a POST request, let's use Postman once again:
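A request body for this endpoint could look like this (the values are examples, matching the WeatherForecast properties used in this post):

```json
{
  "date": "2020-05-03T10:00:00",
  "temperatureC": 21,
  "summary": "Warm"
}
```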

Keep in mind that the size of the request body is limited by the server. It can be anywhere between 1 MB and 2 GB. In ASP.Net Core the maximum request body size is around 28 MB, but that can be changed. What if you would like to send bigger files, over 2 GB? Then you should look into sending the content as a stream or sending it in chunks.

Pass parameters in a form

Sending content in a form is not very common, but it is the best solution if you want to upload a file. Let’s have a look at the example:

    [HttpPost]
    public IActionResult SaveFile([FromForm] string fileName, [FromForm] IFormFile file)
    {
        Console.WriteLine($"Got a file with name: {fileName} and size: {file.Length}");
        return new AcceptedResult();
    }

This method doesn’t really send a file, but it will successfully receive a file from the request. The interface IFormFile is used specifically for handling a file.

When sending the request we need to set the Content-Type to multipart/form-data and in the Body part, we need to choose a file:

Let’s see what do we get when we debug this code:

And the file is correctly read. An interesting fact is that with IFormFile we get not only binary data but also the file type and name. So you might ask why I send a file name separately? This is because you might want to name the file differently on the server than the one you are sending.

Hope you enjoyed this post, you can have a look at the code posted here on my Github:

https://github.com/mikuam/TicketStore

Visual Studio has now solution filtering

Recently I've been working with a solution that has 85 projects, and some of them are really big. Analyzing this amount of data causes difficulties for Visual Studio 2019 and JetBrains Resharper and leads to visible slowdowns. In this huge solution, there are multiple services and websites, but usually I work with only one at a time. Here's what I did.

Visual Studio 2019 introduced solution filtering, so now when you open a solution from a directory, you can choose:

Do not load projects

Now you can load only the project you need.

Then you can go and load all project dependencies.

What you end up with is a subset of projects that you need for the main project you are really working on.

Will that stay?

Yes and no 🙂 You can notice that filtering out projects will be preserved the next time you open your solution in a normal way. However, it does not cause any change in the repository, so the change will not be visible across the team. That might be a good thing – your settings will not affect the work of others. But what if I'd like to share that setting?

You can go ahead and right-click on the solution and choose Save As Solution Filter.

You will end up with a .slnf file that you can open like any other solution file.

In my solution with 85 projects (not the one on the screenshots), I can feel the difference right away. I strongly recommend trying this if you are in a similar situation. Cheers!