Tag Archives: .net core

OData as a flexible data feed for React search

In this post, I’d like to show you a scenario where OData makes perfect sense: a React application with a .Net Core 3.1 back-end and just a single endpoint serving filtered results.

What is OData

OData is a convention for building REST-ful interfaces that defines how resources should be exposed and handled. With this URL-based convention you can not only fetch data but also:

  • filter by one or more properties
  • select only those properties, that you need
  • fetch nested properties
  • take only top X entities
  • and more

What could those URLs look like?

  • Order products by rating
https://services.odata.org/OData/OData.svc/Products?$orderby=Rating asc
  • Get the second page of 30 products
https://services.odata.org/OData/OData.svc/Products?$skip=30&$top=30
  • Select only the Price and Name of a product
https://services.odata.org/OData/OData.svc/Products?$select=Price,Name
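These options can also be combined in a single request. For example, a query that filters, orders, pages and shapes the result at once (following the same convention, using properties from the sample service above) could look like this:

```
https://services.odata.org/OData/OData.svc/Products?$filter=Rating ge 4&$orderby=Price desc&$top=10&$select=Name,Price
```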

More examples can be found here (although it is an older version of the convention) – https://www.odata.org/documentation/odata-version-2-0/uri-conventions/

Create a project from a template

I created an application from a Visual Studio template. This is a Web API with React on the front-end.

It will work beautifully right after you run it. You should see something like this:

This is the page that we will modify later.

I added Entity Framework Core with these NuGet packages:

  • Microsoft.EntityFrameworkCore
  • Microsoft.EntityFrameworkCore.Design
  • Microsoft.EntityFrameworkCore.SqlServer

Then I created an aspnetcoreContext:

public partial class aspnetcoreContext : DbContext
{
    public aspnetcoreContext(DbContextOptions<aspnetcoreContext> options)
        : base(options)
    {
    }

    public virtual DbSet<Profile> Profiles { get; set; }
}

And a Profile class:

public partial class Profile
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
    public string Country { get; set; }
    public string PhoneNumber { get; set; }
    public string Website { get; set; }
    public string CompanyName { get; set; }
    public string Notes { get; set; }
}

Then with command:

dotnet ef migrations add InitMigration

I generated an EF Core migration that adds a Profiles table to my database. The last missing piece is to run a database upgrade at program startup to execute those migrations, so in the Startup class I added:

private void UpgradeDatabase(IApplicationBuilder app)
{
    using (var serviceScope = app.ApplicationServices.CreateScope())
    {
        var context = serviceScope.ServiceProvider.GetService<aspnetcoreContext>();
        if (context != null && context.Database != null)
        {
            context.Database.Migrate();
        }
    }
}

And as the last instruction in the Configure method, I added:

UpgradeDatabase(app);

Simple, right? With very little work we created a Profiles table with the EF Core migrations mechanism. This way creating the DB is a part of program startup, so apart from providing a connection string, there is nothing else to do to start this app.
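If you prefer not to run migrations at application startup, the same result can be achieved from the command line with the EF Core tools (assuming the dotnet-ef tool is installed and a connection string is configured):

```shell
# applies all pending migrations to the database; run from the project directory
dotnet ef database update
```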

You can check the full project in this GitHub repo.

Let’s start with building an OData endpoint

There are only a few lines that we need to add to the Startup class to make OData work.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews(mvcOptions =>
        mvcOptions.EnableEndpointRouting = false);

    services.AddOData();

    // Entity Framework
    services.AddDbContext<aspnetcoreContext>(options =>
          options.UseSqlServer(Configuration.GetConnectionString("LocalDB")));
}

In the Configure method add:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseMvc(routeBuilder =>
    {
        routeBuilder.Select().Expand().Filter().OrderBy().MaxTop(1000).Count();
        routeBuilder.MapODataServiceRoute("odata", "odata", GetEdmModel());
        routeBuilder.EnableDependencyInjection();
    });
}
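The GetEdmModel method used in MapODataServiceRoute is not provided by the framework – we define it ourselves. A minimal sketch, assuming the Profile entity from earlier in this post, could look like this:

```csharp
private static IEdmModel GetEdmModel()
{
    // builds an EDM model from our CLR classes by convention
    var builder = new ODataConventionModelBuilder();
    builder.EntitySet<Profile>("Profiles");
    return builder.GetEdmModel();
}
```

The entity set name "Profiles" is what appears in the OData URL, e.g. /odata/profiles.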

This is all you need to configure OData, now let’s create a controller.

public class ProfilesController : ControllerBase
{
    private readonly aspnetcoreContext _localDbContext;

    public ProfilesController(aspnetcoreContext localDbContext)
    {
        _localDbContext = localDbContext;
    }

    [HttpGet]
    [EnableQuery()]
    public IQueryable<Profile> Get()
    {
        return _localDbContext.Profiles.AsNoTracking().AsQueryable();
    }
}

And that’s it. Now when you run your app and go to this url:

https://localhost:44310/odata

You will see the OData collection available.

And when you go to profiles and check top 50, you will see something like this:

Front-end side

The React application is located in the ClientApp directory, and what we need to change is in the FetchData.js file.

I’m not a front-end expert, but I managed to rewrite this part to use hooks and include very simple logic to fetch data from the OData endpoint.

You can check the full project in this GitHub repo.

The result

Check out how it works in this short video. Notice how fast it is with around 500,000 profiles.

You probably noticed what kind of URLs we fetch from the front-end. Let’s check this one, for example:

https://localhost:44310/odata/profiles?$top=25&$filter=contains(FirstName,%27john%27)%20And%20contains(LastName,%27art%27)

Now, let’s run SQL Profiler and check what is called on the DB side:

Notice that I didn’t have to write this SQL, it was all generated for me.

The beauty of OData

This example showed how easy it is to expose data with OData. It pairs perfectly with Entity Framework Core, which generates the SQL for you. With a simple URL convention, you get huge possibilities for filtering and shaping the data you receive.

From the functional point of view, OData:

  • is very flexible – perfect for forms
  • can be used where an API has many clients that want the output in different shapes
  • gives a full REST-ful experience

I hope you like this post. All code can be found on my GitHub repo.

Enjoy!

 

Entity Framework Core health check

Health checks are important, both for ourselves and for our micro-services. This is something I came across lately – a health check of your connection to a database via an EF Core context. Let’s check this out!

To add a health check to EF Core you need a project:

  • a Web API with .Net Core (I’m using 3.1)
  • with Entity Framework Core and some DbContext

First, install a NuGet package:

Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore

Now go to Startup.cs class and add this code in ConfigureServices method:

services
   .AddHealthChecks()
   .AddDbContextCheck<aspnetcoreContext>();

In the Configure method in the same class add:

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health");
});
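The health check system is extensible – besides the DbContext check, you can register your own checks. For example, a simple liveness check with a lambda (the name "self" here is just an example):

```csharp
services
    .AddHealthChecks()
    // a trivial check that always reports healthy, useful as a liveness probe
    .AddCheck("self", () => HealthCheckResult.Healthy("Service is running"))
    .AddDbContextCheck<aspnetcoreContext>();
```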

Now run the service and go to the /health endpoint; you should see:

Notice that I didn’t have to add any controller class to make this work – it works out of the box.

But does it really check the database status? Now let’s break the connection string and see what the result will be:

So what does it do underneath? Let’s check in SQL Server Profiler whether it connects to the DB.

So it really does call the DB in this check – awesome.

Simple and effective. That’s how code should look. You can find the full code on my GitHub: https://github.com/mikuam/MichalBialecki.com.OData.Search

Hope you enjoyed this, cheers!

 

 

12 things you need to know about .Net Core

.Net Core is an exciting framework to work with, and if you’re wondering what all the fuss is about, I’ll explain everything in just 12 statements.

Let’s not wait anymore and start!

1. .Net Core is a completely new framework

.Net Framework and .Net Core are completely separate frameworks. But why did Microsoft decide to create something from scratch? Let’s see the drawbacks of .Net Framework:

  • it works only on Windows
  • it’s big, not modular
  • it’s not open-source
  • hosting ASP.Net pages is slow

In contrast, a new framework was created – .Net Core. It runs on Windows, Linux and macOS, is open-source, and is designed with web request handling in mind.

2. It’s fast

If we take all the hosting frameworks, .Net Core will be in the top 3, which is already impressive. However, if we take a look only at popular languages and frameworks, ASP.Net Core will be well ahead of the competition. Just take a look at popular hosting frameworks like netty for Java, nginx for C or nodejs for JavaScript. This may be one of the biggest reasons why to switch to .Net Core.

3. Open-source

.Net Core is an open-source project, which means that its sources are available online. Take a look at the page of the .Net Platform on GitHub.

Over 190 repositories within this platform! But what does it mean that it’s open-source?

  • if you’re wondering how a framework method is written or what it does underneath, you can just go and have a look
  • developing .Net Core is no longer only in the hands of Microsoft employees, but also in the hands of the whole community. This is why the development of .Net Core is so fast and new features come out almost every month
  • if you find a bug, you can raise an issue on GitHub or create a pull-request. You can become a framework contributor – how cool is that!

4. Simpler and more compact

If you take a look at the project structure, you will immediately see the difference between frameworks. Here is a project structure of .Net Core on top and .Net Framework underneath.

Those projects were created from a template from Visual Studio. Notice that .Net Core needs just one package, where .Net Framework needs the whole bunch of assemblies to start working.

Now let’s have a look at the project files.

.Net Framework, on the left, contains a lot of configuration and settings, whereas .Net Core doesn’t need anything like this. It’s much easier to read and work with. What’s more, as our project gets bigger, this file will not grow as much as in .Net Framework, so it will remain very readable and great to work with.

5. Multiplatform

This is what I already mentioned, but I wanted to emphasise it even more. .Net Core is a multi-platform framework and this means:

  • we can run .Net Core project on Windows, Linux, and MacOS
  • we can develop .Net Core projects on above systems with Visual Studio or Visual Studio Code
  • we don’t need IIS to host ASP.Net Core pages
  • we can host .Net Core on many servers, like:
    • IIS
    • Azure app service
    • Docker
    • Ngnix
    • Apache

There is also a very handy .Net Core CLI, a command-line interface that works on multiple OSs, which we can use to:

  • build projects
  • run tests
  • publish projects
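For the steps above, the commands could look like this:

```shell
dotnet build
dotnet test
dotnet publish --configuration Release
```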

6. Many things are supported out of the box

.Net Core offers quite a lot without installing many additional packages. It was designed with fast request processing in mind, so it’s ideal for creating Web APIs. As a result, writing a Web API from scratch is trivial, as is adding many of the features you need:

  • Dependency Injection
  • JSON support
  • logging
  • easy configuration binding
  • integration with Polly – a retry mechanism
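As a small illustration of two of these – built-in Dependency Injection and JSON support – here is a minimal controller sketch (the controller name and its data are made up for this example):

```csharp
[ApiController]
[Route("[controller]")]
public class ProductsController : ControllerBase
{
    private readonly ILogger<ProductsController> _logger;

    // ILogger comes from the built-in DI container, no extra packages needed
    public ProductsController(ILogger<ProductsController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IActionResult Get()
    {
        _logger.LogInformation("Fetching products");
        // the array is serialized to JSON automatically
        return Ok(new[] { "Product A", "Product B" });
    }
}
```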

7. Modularity

.Net Framework is a big framework. In software development, changing, testing and releasing big projects is hard and requires a lot of time. This is why .Net Framework wasn’t released frequently and wasn’t changing very rapidly. .Net Core, however, is modular and consists of more than 190 repositories. What does that mean? Theoretically:

  • every module can be worked on separately
  • every module can be released separately
  • work can be better distributed and done more in parallel

This has proved true, as .Net Core is developing rapidly, especially when you look at the integration with the Azure cloud.

Another advantage of having a modular framework is that you only reference the packages you need. Because of that, your app will be lighter and quicker to publish.

8. There are windows!

Yes! It was a surprise for me that .Net Core, from version 3.0, has a Windows Forms implementation. This package contains the most popular elements of WinForms as I know them from the old days. This is very exciting because you can benefit from all the goodies of .Net Core (performance, modularity), while the looks stay the same. Sadly there are no plans to make Windows Forms multi-platform, so you won’t be able to run it on Linux or macOS.

9. .Net Core 3.1, what’s next?

Take a look at this graph:

On the top there is a line representing the development of .Net Framework and on the bottom, one that represents the development of Mono. FYI, Mono is a .Net Framework implementation that runs on Linux, but doesn’t support all of its features. In the middle, there is a .Net Core development.

.Net Core will contain all of the best features of .Net Framework and Mono and will build on top of them. What’s more, Microsoft says that .Net Core is the future and it is the framework that they will support. You may have noticed that the next version of .Net Core will be called .Net 5. This is because a version 4 could be confused with .Net Framework 4.x.

When is it coming? Soon – around November 2020. What’s new? .Net 5 will support IoT, mobile phones and gaming (Xbox). This means that a library written once can be used in all of those cases, and I’m sure that the development of new features will only accelerate.

10. You don’t need to rewrite everything

If you’d like to try out .Net Core while still using .Net Framework, don’t worry. .Net Core and .Net Framework can’t work together directly, but you can build your library targeting .Net Standard. .Net Framework 4.6.2 can reference .Net Standard 2.0 libraries, and higher versions of .Net Framework support higher versions of .Net Standard. All supported versions are listed here, at the bottom: https://dotnet.microsoft.com/platform/dotnet-standard.
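Targeting .Net Standard is just a matter of the TargetFramework setting in the library’s project file – a minimal sketch:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```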

I personally work on a solution where we have multiple older projects in .Net Framework, but also a few in .Net Standard, all working together.

.Net Standard is a subset of the libraries of .Net Core, so you have most of the features available, but you can write code that supports older frameworks.

Important note – porting existing projects to .Net Core can be trivial or not possible at all. The problem might be with legacy libraries that were written for .Net Framework and will not work on .Net Core. In those cases, you would need to rewrite your project to use different libraries and that can take time.

Many things in .Net Core are rewritten or written anew, so methods and interfaces might differ. However, most changes are in configuration, dependency injection, and handling the request pipeline.

11. Should I learn it?

Yes! There are many reasons, but let’s look at the most important ones:

  • Microsoft says it’s the future and will be maintained and developed
  • It’s easy to work with, multi-platform and open source
  • It’s fast!

12. Stay informed

One last thing, as an addition. If you like this post and you’d like to stay informed about .Net Core, let’s stay in touch:

You can also learn .Net Core 3 with me on Udemy. Here is a link to the best possible offer. I’m sure you won’t get disappointed: https://www.udemy.com/course/microsoft-aspnet-core-30/?referralCode=8CD54D26BD8929CC27EB 

Thanks for reading, cheers!

Pimp your repo with GitHub Actions!

Do you have a GitHub account with a repository? Improve it with GitHub Actions! GitHub Actions lets you build your own workflows triggered by all kinds of events from your repositories. If you go and check this website, it looks very promising.

Let’s start with a build

To start working with GitHub Actions, just go to the Actions tab on your repository page.

As my repo is built in .Net Core, I can choose the template that GitHub suggests. After that, we will be able to edit a yml file to set up our first workflow. Let’s check how it looks:

name: .NET Core

on: 
    push:
        branches:
            - github-actions

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v1
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.100
    - name: Build with dotnet
      run: dotnet build --configuration Release

What to notice:

  • “on” tells us which repository event will trigger our workflow. Mine will be triggered by a push to the branch “github-actions”
  • “jobs” – each of those will be visible as a separate big step
  • “build” – a job name; a job can have multiple small steps
  • “runs-on” identifies which operating system the workflow will run on. You can choose from ubuntu-latest, macos-latest and windows-latest, and because .Net Core can be built on Linux, I chose ubuntu-latest

After a few tries to get it right, the result looked like this:

Let’s run unit tests

My repository has a project dedicated to unit tests, so I’d like to run it and check if all tests are passing. In order to achieve that, I just need to add a few lines to my yml file.

name: .NET Core

on: 
    push:
        branches:
            - github-actions

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v1
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.100
    - name: Build with dotnet
      run: dotnet build --configuration Release
    - name: Run unit tests
      run: dotnet test --no-build --configuration Release

After committing that file, my workflow ran instantly and it took under a minute to see the results.

What about code coverage?

One of the cool things is that we can use actions provided by other users. In order to check our project’s code coverage, we need to do some work on our side, but to expose the result we can integrate with Coveralls.io. Let’s go step by step.

The first thing we need to do is install the coverlet.msbuild NuGet package in our test projects. This will enable us to generate a code coverage file in lcov format.
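The package can be installed from the command line, for example:

```shell
dotnet add package coverlet.msbuild
```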

For code coverage, I created a separate workflow.

name: Code coverage

on: 
    push:
        branches:
            - github-actions

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v1
      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
           dotnet-version: 3.0.100
      - name: Generate coverage report
        run: |
          cd ./Tests/TicketStore.Tests/
          dotnet test /p:CollectCoverage=true /p:CoverletOutput=TestResults/ /p:CoverletOutputFormat=lcov
      - name: Publish coverage report to coveralls.io
        uses: coverallsapp/github-action@v1.0.1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          path-to-lcov: ./Tests/TicketStore.Tests/TestResults/coverage.info   

This is a slightly more complicated case, where we run “dotnet test” with additional parameters to output code coverage. Then we use coverallsapp GitHub Action to integrate with Coveralls.io. In order to make it secure, we use an existing GitHub token. The result can be checked on the Coveralls page.

Let’s add a cherry on top. Coveralls.io generates a badge that we can use in our Readme.md file. If I add a link like this at the top:

![Build Status](https://github.com/mikuam/TicketStore/workflows/.NET%20Core/badge.svg?branch=github-actions) [![Coverage Status](https://coveralls.io/repos/github/mikuam/TicketStore/badge.svg?branch=github-actions)](https://coveralls.io/github/mikuam/TicketStore?branch=github-actions) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/mikuam/TicketStore/blob/master/LICENSE)

Then I can finally see my badges.

Go check for yourself: https://github.com/mikuam/TicketStore

Why is it so exciting?

I found GitHub Actions very exciting, because:

  • it’s free! I host many of my pet projects on GitHub and it never cost me a dime, but now I can configure CI/CD process for free as well
  • it supports many languages and frameworks. Even for .Net Framework projects that do not run on .Net Core, I can build them on Windows and set up the whole process. I can even run PowerShell scripts
  • I can do more and automate things like deploying to Azure or creating NuGet package

An important thing I noticed is that it didn’t work with projects targeting .Net Core 3.1, but when I changed the projects to .Net Core 3.0 it worked fine.

I really enjoyed playing around with GitHub Actions and felt a bit like a DevOps guy :) I’m not very experienced in building infrastructure like that, but it was very simple and intuitive. And those badges… now my repository looks professional! 🙂

ASP.Net Core 3 – configuration

In this chapter, we will cover how we can use configuration in ASP.Net Core 3. But before diving in, let’s see for a moment how it looked in plain old ASP.Net.

Configuration before .Net Core

In old ASP.Net, configuration was typically handled in just one XML file – Web.config. It was the place where everything went, from connection strings to assembly versions and detailed framework settings. This file got bigger and bigger as the project grew and became hard to read. Luckily, you could use separate files if you linked them in the Web.config. Here is how the most important part of this file looked:
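A simplified sketch of such a file (the names and values here are only illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="DB" connectionString="Server=.;Database=MyDb;Integrated Security=true" />
  </connectionStrings>
  <appSettings>
    <add key="SmtpServerAddress" value="smtp.example.com" />
  </appSettings>
</configuration>
```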

The XML format was perfectly readable, but you would need to follow a specific convention. Let’s now see how things changed in .Net Core.

New possibilities

In .Net Core 3 things are completely different:

  • configuration can be stored in many files, and the most common format is JSON
  • we can follow our own format
  • we can have nesting
  • configuration can be parsed to whole classes with nested objects
  • we can make configuration refreshable when an application is running, without the need to restart it

Instead of having configuration in an XML format, in .Net Core we have many more possibilities. Here are the sources we can use (supported by default by the framework):

  • Azure Key Vault
  • Azure App Configuration
  • Command-line arguments
  • Directory files (INI, JSON, XML)
  • Environment variables (by default prefixed by DOTNET_)
  • In-memory .Net objects
  • Settings files
  • and custom providers

Note that variables can be overridden when another source provides the same variable, so the order in which configuration sources are applied is important.

One more thing – we can have different configurations per environment. To achieve that, we suffix our configuration files with environment names. The framework defines the environment names development, staging and production. So we can name our files like this:

  • appsettings.json – that would contain development variables
  • appsettings.production.json – that would contain production variables

Let’s use a configuration in .Net Core 3

First, let’s have an example of an appsettings.json configuration file:
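A sketch of such a file, matching the configuration classes used in this post (the values are illustrative):

```json
{
  "ConnectionStrings": {
    "DB": "Server=.;Database=TicketStore;Integrated Security=true"
  },
  "Email": {
    "SmtpServerAddress": "smtp.example.com",
    "SmtpServerPort": 587,
    "SenderAddress": "noreply@example.com"
  }
}
```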

In this configuration file, I have an Email section with configuration for sending e-mails. In order to fetch the whole configuration at once, I’ll add three classes:

    public class ServiceConfiguration
    {
        public ConnectionStringsConfiguration ConnectionStrings { get; set; }

        public EmailConfiguration Email { get; set; }
    }

    public class EmailConfiguration
    {
        public string SmtpServerAddress { get; set; }

        public int SmtpServerPort { get; set; }

        public string SenderAddress { get; set; }
    }

    public class ConnectionStringsConfiguration
    {
        public string DB { get; set; }
    }

ServiceConfiguration represents the whole configuration and EmailConfiguration represents the Email section.

Let’s now go to my Startup class. IConfiguration is an interface for handling configuration, provided by the framework and registered in DI by default. In ConfigureServices we can bind the configuration and register it in the Dependency Injection container.

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // configuration
        var serviceConfiguration = new ServiceConfiguration();
        Configuration.Bind(serviceConfiguration);
        services.AddSingleton(serviceConfiguration);
    }
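If you only need one section, you can also bind it directly, without a root configuration class – an alternative sketch:

```csharp
// binds just the Email section to EmailConfiguration
var emailConfiguration = Configuration.GetSection("Email").Get<EmailConfiguration>();
services.AddSingleton(emailConfiguration);
```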

Notice that we need just those three lines to make it work. Now let’s see how we can implement a class that sends e-mails.

    public class EmailSenderService : IEmailSenderService
    {
        private readonly EmailConfiguration emailConfiguration;
        private readonly SmtpClient _client;

        public EmailSenderService(ServiceConfiguration configuration)
        {
            emailConfiguration = configuration.Email;
            _client = new SmtpClient(emailConfiguration.SmtpServerAddress, emailConfiguration.SmtpServerPort);
        }

        public async Task SendEmail(string emailAddress, string content)
        {
            var message = new MailMessage(emailConfiguration.SenderAddress, emailAddress)
            {
                Subject = content
            };

            await _client.SendMailAsync(message);
        }
    }

This is programming pleasure in its purest form. Notice that we are injecting ServiceConfiguration, which is our representation of the configuration in code. We do not need to parse JSON files or dig for nested variables. We just fetch configuration the way we defined it – simple.

What you don’t need to do

In many tutorials, and even in official Microsoft documentation, you could see that in order to read the appsettings.json file you would need to make this change in the Program.cs file:

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                config.AddJsonFile("appsettings.json");
            })
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });

The truth is that the appsettings.json file is read by default by the framework, without using ConfigureAppConfiguration.

This can cause problems when deploying to Azure. When I was doing it for the first time, it took me hours to figure that out. I had set up variables in the App Service configuration, but they were overridden by the local appsettings.json file because of this line – it was applied after reading the configuration from App Service. You can safely remove this line.

 

Hope you enjoyed this post. You can have a look at the code on my GitHub:

https://github.com/mikuam/TicketStore

 

ASP.Net Core 3 – Dependency Injection

Dependency Injection is a fundamental concept in computer programming, successfully implemented in many programming languages. What makes it so useful, and how does .Net Core 3 support it?

Let’s start with the definition.

Dependency Injection is a software design pattern where dependencies are not created by the client, but rather passed to the client.

In practice, instead of creating dependencies with the new keyword, we declare what we need and delegate the responsibility of providing it to the injector. A class does not need to know how to create its dependencies – that’s not a part of its logic.

By separating the creation of a service from its behavior, we can build loosely coupled services. In our classes, we concentrate on behavior.

Dependency Injection is part of a broader concept – Inversion of Control. Dependency Injection, DI for short, follows two SOLID principles: the dependency inversion principle and the single responsibility principle. It is crucial for creating well-designed, well-decoupled software – it’s simply a must-have.

Why is creating dependencies on your own a bad idea?

    var ratingsProvider = new MovieRatingProvider(new MoviesClient("connection"), 3);

  • to change the implementation of a dependency (in this case MovieRatingProvider), we need to change every place where it is used
  • to create the dependency, we need to create all of its dependencies as well (MoviesClient)
  • it’s hard to write unit tests – dependencies should be easy to mock; in tests we do not want to create a MoviesClient, we want to mock it and test MovieRatingProvider

Did you know? There is a style of programming where you are not allowed to use the new keyword outside of a dedicated factory. So you not only delegate creating dependencies to the DI container, but also create factories for all other objects. It’s a good concept, although it doesn’t always make sense.

Practical example

Let’s say we have a Web API for getting events. We have an EventsController that gets events from EventProvider, which in turn gets movie ratings from MovieRatingProvider. On a diagram it looks like this:

Now let’s see the code. EventsController looks like this:

    [Route("[controller]")]
    [ApiController]
    public class EventsController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            try
            {
                var provider = new EventProvider();

                return new JsonResult(provider.GetActiveEvents());
            }
            catch (Exception)
            {
                // logging
                return StatusCode(StatusCodes.Status500InternalServerError);
            }
        }
    }

You can see that EventProvider, a dependency, is created with the new keyword. Here is how it looks:

    public class EventProvider
    {
        public IEnumerable<Event> GetActiveEvents()
        {
            var events =  GetAllEvents();

            return ApplyRatings(events);
        }

        private IEnumerable<Event> ApplyRatings(IEnumerable<Event> events)
        {
            var ratingsProvider = new MovieRatingProvider();
            var movieRatings = ratingsProvider.GetMovieRatings(
                events.Where(e => e.Type == EventType.Movie)
                      .Select(m => m.Title));

            foreach (var rating in movieRatings)
            {
                var eventToRate = events.FirstOrDefault(e => e.Title == rating.Key);
                if (eventToRate != null)
                {
                    eventToRate.Rating = rating.Value;
                }
            }

            return events;
        }

        private static IEnumerable<Event> GetAllEvents()
        {
            // some list here
            return Enumerable.Empty<Event>();
        }
    }

EventProvider is a bit more complicated. It gets all events, and then for movies it fetches ratings and tries to apply them. The last dependency – MovieRatingProvider – looks like this:

    public class MovieRatingProvider
    {
        public IDictionary<string, decimal> GetMovieRatings(IEnumerable<string> movieTitles)
        {
            var random = new Random();

            var ratings = movieTitles
                .Distinct()
                .Select(title => new KeyValuePair<string, decimal>(title, (decimal)random.Next(10, 50) / 10));

            return new Dictionary<string, decimal> (ratings);
        }
    }

The first step

What should be the first step to introduce Dependency Injection? Interfaces! We need to introduce interfaces for all our dependencies:

    public interface IEventProvider
    {
        IEnumerable<Event> GetActiveEvents();
    }

    public interface IMovieRatingProvider
    {
        IDictionary<string, decimal> GetMovieRatings(IEnumerable<string> movieTitles);
    }

And we need to have our classes implement them:

public class EventProvider : IEventProvider

public class MovieRatingProvider : IMovieRatingProvider

The second step

We need to use our interfaces, instead of concrete classes. How do we do that? We need to pass an interface to our class.

There are two popular ways:

  • constructor injection
  • property injection

We will use the first one, which I think is better, because the constructor is the single place that gathers all dependencies together. Let's see how it looks in EventsController:

    [Route("[controller]")]
    [ApiController]
    public class EventsController : ControllerBase
    {
        private readonly IEventProvider _eventProvider;

        public EventsController(IEventProvider eventProvider)
        {
            _eventProvider = eventProvider;
        }

        [HttpGet]
        public IActionResult Get()
        {
            try
            {
                return new JsonResult(_eventProvider.GetActiveEvents());
            }
            catch (Exception)
            {
                // logging
                return StatusCode(StatusCodes.Status500InternalServerError);
            }
        }
    }

We pass IEventProvider in the constructor and save it in a private field. It will not be accessible outside of the class, but the same instance of EventProvider will be available in every method of the class. Brilliant!

Now let’s look at the EventProvider:

    public class EventProvider : IEventProvider
    {
        private readonly IMovieRatingProvider _movieRatingProvider;

        public EventProvider(IMovieRatingProvider movieRatingProvider)
        {
            _movieRatingProvider = movieRatingProvider;
        }

        public IEnumerable<Event> GetActiveEvents()
        {
            var events =  GetAllEvents();

            return ApplyRatings(events);
        }

        private IEnumerable<Event> ApplyRatings(IEnumerable<Event> events)
        {
            var movieRatings = _movieRatingProvider.GetMovieRatings(
                events.Where(e => e.Type == EventType.Movie)
                      .Select(m => m.Title));

            foreach (var rating in movieRatings)
            {
                var eventToRate = events.FirstOrDefault(e => e.Title == rating.Key);
                if (eventToRate != null)
                {
                    eventToRate.Rating = rating.Value;
                }
            }

            return events;
        }

        private static IEnumerable<Event> GetAllEvents()
        {
            // some list here
            return Enumerable.Empty<Event>();
        }
    }

An implementation of IMovieRatingProvider is also passed via constructor injection and saved in a private field. All is ready for…

The Final step

In .Net Core 3, support for Dependency Injection is built into the framework, so you don't need to do much to make it work. All you need to do is go to the Startup class and, in the ConfigureServices method, add registrations for your services.

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        services.AddScoped<IMovieRatingProvider, MovieRatingProvider>();
        services.AddScoped<IEventProvider, EventProvider>();
    }

I added services.AddScoped calls that bind each interface to its implementing class. This is how the framework knows which instance to pass when you declare a dependency through an interface. Simple as that; those were all the changes needed to make it work. Let's see the new application schema side by side with the old one:
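One practical payoff of this setup is testability: because EventProvider depends on the IMovieRatingProvider interface, you can exercise it in a plain unit test with a hand-written fake, no container required. The sketch below redeclares minimal versions of the post's types so it compiles on its own, and makes ApplyRatings public so it can be called directly:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum EventType { Movie, Concert }

public class Event
{
    public string Title { get; set; }
    public EventType Type { get; set; }
    public decimal? Rating { get; set; }
}

public interface IMovieRatingProvider
{
    IDictionary<string, decimal> GetMovieRatings(IEnumerable<string> movieTitles);
}

// Simplified EventProvider from the post; ApplyRatings is public here
// so it can be called directly in a test.
public class EventProvider
{
    private readonly IMovieRatingProvider _movieRatingProvider;

    public EventProvider(IMovieRatingProvider movieRatingProvider)
    {
        _movieRatingProvider = movieRatingProvider;
    }

    public IEnumerable<Event> ApplyRatings(List<Event> events)
    {
        var movieRatings = _movieRatingProvider.GetMovieRatings(
            events.Where(e => e.Type == EventType.Movie).Select(e => e.Title));

        foreach (var rating in movieRatings)
        {
            var eventToRate = events.FirstOrDefault(e => e.Title == rating.Key);
            if (eventToRate != null)
            {
                eventToRate.Rating = rating.Value;
            }
        }

        return events;
    }
}

// A hand-written fake; in a real test suite you might use a mocking library.
public class FixedRatingProvider : IMovieRatingProvider
{
    public IDictionary<string, decimal> GetMovieRatings(IEnumerable<string> movieTitles)
        => movieTitles.ToDictionary(t => t, _ => 4.5m);
}

public static class Program
{
    public static void Main()
    {
        var provider = new EventProvider(new FixedRatingProvider());
        var events = new List<Event>
        {
            new Event { Title = "Matrix", Type = EventType.Movie },
            new Event { Title = "Gig", Type = EventType.Concert }
        };

        var result = provider.ApplyRatings(events).ToList();
        Console.WriteLine(result[0].Rating);         // stubbed movie rating
        Console.WriteLine(result[1].Rating == null); // concert untouched
    }
}
```

With the random MovieRatingProvider from the post this logic would be hard to assert on; swapping in the fake makes the outcome deterministic.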

Single point of configuration

The Startup class and its ConfigureServices method is the only place where we need to configure dependencies for the whole application. Even if your solution contains multiple projects, you configure DI once per executable project, such as a Web API or an ASP.Net website. If you have two such projects in your solution, you need to configure DI in both of them.

Service lifetimes

You probably noticed that I used AddScoped method to register my dependencies. This is one of the three service lifetimes that you can use:

  • Transient (AddTransient)
  • Scoped (AddScoped)
  • Singleton (AddSingleton)

Those service lifetimes are pretty standard for any Dependency Injection container. Let’s have a quick look at what they are for.

Scoped lifetime – the most popular one. If you register your service as scoped, it will be created only once per request. That means that whenever you resolve the same interface within one request, the same instance will be returned. Let's have a look at this simple example:

Here, when ProductController is called, it depends on ProductService and OrderService, which also depends on ProductService. In this case the .Net Core 3 DI container resolves the ProductService dependency twice, but it creates ProductService once and returns the same object in both places, because both resolutions happen within the same request. In most cases, this is the lifetime you should use.
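How that scoped reuse works can be sketched in a few lines. This is a toy illustration, not the real container; all the names are made up:

```csharp
using System;
using System.Collections.Generic;

// A made-up service whose identity we can observe.
public class ProductService
{
    public Guid InstanceId { get; } = Guid.NewGuid();
}

// A minimal, illustrative "scope": within one scope (one request)
// the same instance is returned; a new scope starts fresh.
public class ToyScope
{
    private readonly Dictionary<Type, object> _instances = new Dictionary<Type, object>();

    public T Resolve<T>() where T : new()
    {
        if (!_instances.TryGetValue(typeof(T), out var instance))
        {
            instance = new T();
            _instances[typeof(T)] = instance;
        }
        return (T)instance;
    }
}

public static class Program
{
    public static void Main()
    {
        var requestScope = new ToyScope();
        var a = requestScope.Resolve<ProductService>();
        var b = requestScope.Resolve<ProductService>();
        Console.WriteLine(ReferenceEquals(a, b)); // same request, same instance

        var nextRequestScope = new ToyScope();
        var c = nextRequestScope.Resolve<ProductService>();
        Console.WriteLine(ReferenceEquals(a, c)); // new request, new instance
    }
}
```

A transient registration would simply return new T() on every Resolve call, and a singleton would share one such dictionary for the application's whole lifetime.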

Transient lifetime – if you register your service with a transient lifetime, you will get a new object whenever you fetch it as a dependency, no matter if it is a new request or the same one.

Singleton lifetime – this is the trickiest one. Singleton is a very well known design pattern: whenever you use an object, you get the same instance of it, every time, even in a different request or a different thread. This is an invitation to problems, and it is also why it is sometimes called an antipattern. With the singleton lifetime you get the same instance of your object every time, for the whole lifetime of your application. It's not a bad idea to use a singleton, but you need to implement it in a thread-safe manner. It may be useful whenever the creation of an object is expensive (time or resource-wise) and you would rather keep it in memory for the next usage than create it every time. For example, you can use a singleton to create a service that sends emails. Creating an SmtpClient is expensive and can be done only once.

    public class EmailSenderService : IEmailSenderService
    {
        private readonly IConfiguration _configuration;

        private readonly SmtpClient _client;

        public EmailSenderService(IConfiguration configuration)
        {
            _configuration = configuration;

            var smtpServerAddress = _configuration.GetValue<string>("Email:smtpServerAddress");
            var smtpServerPort = _configuration.GetValue<int>("Email:smtpServerPort");
            _client = new SmtpClient(smtpServerAddress, smtpServerPort);
        }

        public async Task SendEmail(string emailAddress, string content)
        {
            var fromAddress = _configuration.GetValue<string>("Email:senderAddress");
            var message = new MailMessage(fromAddress, emailAddress);
            message.Body = content;

            await _client.SendMailAsync(message);
        }
    }

And in Startup class:

services.AddSingleton<IEmailSenderService, EmailSenderService>();

Is built-in DI container enough?

This is an important question to ask. Microsoft did a great job in developing a Dependency Injection container, but there are several great solutions out there that are ready to use. Actually, Microsoft lists them on an official documentation page: https://docs.microsoft.com/en-gb/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-3.1#default-service-container-replacement

So what are popular DI containers you might try out?

  • Autofac
  • Lamar
  • Scrutor

And those are only a few; there are more. The important thing is how they differ from the built-in container. The obvious answer is that they offer more. So what doesn't Microsoft's built-in container offer?

  • property injection
  • custom lifetime management
  • lazy initialization
  • auto initialization based on name

I must admit that I miss the last one the most. In SimpleInjector for .Net Core 2.1, it was possible to register dependencies automatically as long as they followed a naming convention: the interface has the same name as the implementing class, preceded by 'I'. There was no need to write registrations for 90% of cases.

So what would I use?

I would use the built-in container whenever I don't need any specific features. Without 3rd party nuget packages, the code is cleaner and easier to understand. The .Net Core 3 container does a pretty good job and you probably won't need anything else.

Hope you enjoyed this post, you can have a look at the code posted here on my Github:

https://github.com/mikuam/TicketStore

ASP.Net Core 3 – pass parameters to actions

Passing parameters to actions is an essential part of building RESTful Web API. .Net Core offers multiple ways to pass parameters to methods, that represent your endpoints. Let’s see what they are.

Pass parameter as a part of an url

When passing a parameter in a url, you need to define a routing that contains the parameter. Let's have a look at the example:

    [Route("{daysForward}")]
    [HttpGet]
    public IActionResult Get(int daysForward)
    {
        var rng = new Random();
        return new JsonResult(new WeatherForecast
        {
            Date = DateTime.Now.AddDays(daysForward),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });
    }


This method returns a WeatherForecast for a single day in the future. The daysForward parameter represents how many days in advance the weather forecast should be returned for. Notice that daysForward is a part of the routing, so a valid url to this endpoint will look like:

GET: weatherForecast/3

We can also put the [FromRoute] attribute before the variable type, but route parameters bind this way by default, so it works the same without it:

   public IActionResult Get([FromRoute] int daysForward)

Pass parameter in a query string

This is a very common method of passing additional parameters, because it does not require us to change the routing, so it is also backward compatible. That matters if we are changing an existing solution.

Let's look at a different method that returns a collection of weather forecasts with a sorting option.

    [HttpGet]
    public IEnumerable<WeatherForecast> Get([FromQuery] bool sortByTemperature = false)
    {
        var rng = new Random();
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });

        if (sortByTemperature)
        {
            forecasts = forecasts.OrderByDescending(f => f.TemperatureC);
        }

        return forecasts;
    }

In this example we pass an optional sortByTemperature parameter. Notice that we use the [FromQuery] attribute to indicate that the value is taken from the query string. A url to this endpoint would look like this:

GET: weatherForecast?sortByTemperature=true

You can pass many parameters like this:

GET: weatherForecast?key1=value1&key2=value2&key3=value3

Remember that a url needs to be encoded properly to work. If you were to pass a parameter like this:

https://www.michalbialecki.com/?name=Michał Białecki

It will need to be encoded into:

https://www.michalbialecki.com/?name=Micha%C5%82%20Bia%C5%82ecki
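You don't have to do that encoding by hand; in C#, Uri.EscapeDataString encodes a single query string value, turning spaces and non-ASCII characters into UTF-8 percent escapes:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        // Encode a query string value before putting it in the url.
        var name = "Michał Białecki";
        var url = $"https://www.michalbialecki.com/?name={Uri.EscapeDataString(name)}";
        Console.WriteLine(url);
        // prints https://www.michalbialecki.com/?name=Micha%C5%82%20Bia%C5%82ecki
    }
}
```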

Pass parameters with headers

Passing parameters in request headers is less popular, but also widely used. They don't show up in the url, so they are less noticeable by the user. A common scenario for passing parameters in a header is providing credentials, or a parent request id to enable tracking across multiple applications. Let's have a look at this example:

    [HttpPost]
    public IActionResult Post([FromHeader] string parentRequestId)
    {
        Console.WriteLine($"Got a header with parentRequestId: {parentRequestId}!");
        return new AcceptedResult();
    }

In order to send a POST request, we would need to use some kind of a tool, I’ll use Postman:

Here you see that I specified headers and parentRequestId is one of them.

Pass parameters in a request body

The most common way to pass data is to include it in the request body. We can add a Content-Type header with the value application/json to inform the receiver how to interpret the body. Let's have a look at our example:

    [HttpPost]
    public IActionResult Post([FromBody] WeatherForecast forecast)
    {
        Console.WriteLine($"Got a forecast for data: {forecast.Date}!");
        return new AcceptedResult();
    }

We use the [FromBody] attribute to indicate that forecast will be taken from the request body. In .Net Core 3 we don't need to deserialize the json body to a WeatherForecast object ourselves; it happens automatically. To send a POST request, let's use Postman once again:

Keep in mind that the size of the request body is limited by the server; limits anywhere from 1MB to 2GB are common. In ASP.Net Core the default maximum request body size is around 28MB, but that can be changed. What if you would like to send bigger files than that, over 2GB? Then you should look into sending content as a stream or sending it in chunks.

Pass parameters in a form

Sending content in a form is not very common, but it is the best solution if you want to upload a file. Let’s have a look at the example:

    [HttpPost]
    public IActionResult SaveFile([FromForm] string fileName, [FromForm] IFormFile file)
    {
        Console.WriteLine($"Got a file with name: {fileName} and size: {file.Length}");
        return new AcceptedResult();
    }

This endpoint receives a file from the request, along with a separate fileName field. The IFormFile interface is used specifically for handling a file.

When sending the request we need to set Content-Type to multipart/form-data and in the Body part, we need to choose a file:

Let's see what we get when we debug this code:

And the file is correctly read. An interesting fact is that with IFormFile we get not only binary data but also the file type and name. So you might ask why I send a file name separately? Because you might want to name the file on the server differently than the one you are sending.

Hope you enjoyed this post, you can have a look at the code posted here on my Github:

https://github.com/mikuam/TicketStore

.Net Core Global Tools – your custom app from nuget package

I love .net core. It is an awesome concept and a great, light framework to work with. One essential part of the framework environment is the .Net Core CLI, a set of cross-platform tools and commands that can create, build and publish your app. Along with the platform come Global Tools, a concept of a console application that can be distributed as a nuget package. Today, I'm going to show you how to make one.

Creating a Global Tool

Creating a console application in .net core is trivial, but creating a Global Tool, for me, wasn't. I tried the simple way: create a console application and then amend it to make it a tool. It didn't work out, so today I'm going to show you how to do it differently.

The first thing you need is a templates package. .Net Core does not ship a template for global tools out of the box, so you need to install one.

dotnet new --install McMaster.DotNet.GlobalTool.Templates

Once you do it, you can just create a new project:

dotnet new global-tool --command-name MichalBialecki.com.MikTrain

That worked perfectly and after just a few changes to Program.cs file, my tool was ready. You can check it out in my repo:
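For reference, what the template generates is essentially a standard console project with two extra properties in the csproj. The values below mirror my project and may differ in yours:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.2</TargetFramework>
    <!-- These two properties turn the console app into a global tool -->
    <PackAsTool>true</PackAsTool>
    <ToolCommandName>miktrain</ToolCommandName>
  </PropertyGroup>
</Project>
```

PackAsTool tells dotnet pack to produce a tool package, and ToolCommandName defines the command you will type to run it.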

The next step is to create a nuget package. There are a few ways to do that, but I'll go with the simplest one: using a command.

dotnet pack --output ./

After the build, I have a package, but it's not yet ready to be sent to nuget.org. It's missing a license and an icon needed to pass validation. I ended up adding those files and editing the metadata to get it through. To have a more visual look at things, I'm using NuGet Package Explorer. When I open MichalBialecki.com.MikTrain.1.0.4.nupkg I will see:

My nuspec file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
  <metadata>
    <id>MichalBialecki.com.MikTrain</id>
    <version>1.0.1</version>
    <authors>Michał Białecki</authors>
    <owners>Michał Białecki</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <license type="file">LICENSE.txt</license>
    <icon>icon.jpg</icon>
    <description>A sample console application to show Global Tools</description>
    <packageTypes>
      <packageType name="DotnetTool" />
    </packageTypes>
    <dependencies>
      <group targetFramework=".NETCoreApp2.2" />
    </dependencies>
  </metadata>
</package>

You might get some errors…

If you encounter NU1212 error, it might mean that you are missing packageType tag in your nuspec file. Check the one above.

If you encounter NU1202 error, you’re probably missing Microsoft.NETCore.Platforms dependency.

For more hints, go to this great article.

I added my package via the website, and after the package is verified, you will see:

If it's not available right away, you probably need to wait a couple of hours. At least I had to 🙂

Installing and running the application

Once the nuget package is validated and checked by nuget.org, you can try to install your app. To do it, you need to use the command:

dotnet tool install -g MichalBialecki.com.MikTrain --version 1.0.4

Installing my tool gave me this result:

And now I can run my app with command: miktrain

What is it for?

A global tool is a console application that can be a handy utility. I'm sure you have some of those in your organization. The power of Global Tools is the ability to install updates from a nuget package. Let's say something changed in a tool and you just want the newest version. You just need to execute a command:

dotnet tool update -g MichalBialecki.com.MikTrain

And your app is up to date. It can now be shared across the team and updates can be distributed much more elegantly. There's no longer a need to share a magic exe file 😛

 All code posted here you can find on my GitHub: https://github.com/mikuam/console-global-tool-sample

.Net Core – introduction

A .Net Core is a catchphrase that you can hear more and more often in both developer discussions and job offers. You probably already heard that it’s fast, simple and runs on multiple platforms. In this post, I’d like to sum everything up and highlight what I like the most about it.

Why new framework?

Microsoft successfully released the full .Net Framework more than a dozen years ago and has been adding features with every version. What are its biggest drawbacks?

  • it runs and can only be developed on Windows
  • it’s not modular, you target a framework as a dependency, rather than multiple packages
  • ASP.NET is not fast enough compared to other frameworks
  • it’s not open-source, so the community cannot really contribute

Because of that, Microsoft needed to introduce something new that would run on Windows and Linux and be small, fast and modular. This is why .Net Core was introduced. Version 1.0 was developed in parallel to .Net Framework 4.6 and was initially released in the middle of 2016 as a set of open-source libraries hosted on GitHub.

Image from:
https://codeburst.io/what-you-need-to-know-about-asp-net-core-30fec1d33d78

Cross-platform

Software written in .Net Core can now run on Windows, Linux and MacOS. What's even more important, you can also build and host .Net Core apps on those operating systems. So from now on, your ASP.NET website does not need IIS for hosting; it can be hosted on Linux. You can even create a docker container out of it and deploy it in the cloud. This is great in terms of maintenance, scalability and performance. It also means that you no longer need Microsoft infrastructure to host your .net app.

Modularity

In .Net Framework, new features were added only with a new release. Since in .Net Core every library is a nuget package, your app does not depend on one monolithic framework. You can reference different versions of System.IO and System.Net.Http independently.

What's more, you install only the basic dependency, Microsoft.NETCore.App, and others can be installed depending on what you need. This is reflected in a smaller app size.

Open-source

From the very beginning, .Net Core was published as a dozen or so GitHub repositories. Now there are 179 of them! You are free to take a look at what features are coming and what the core looks like. It also means that while writing code, you can step into a framework method to check what's going on underneath.

Around those repositories grew a community eager to develop .Net Core and they are constantly improving it. You can be a part of it! As in any GitHub repository, you can raise an issue if you notice a bug or create a pull-request to propose a solution.

Easier to write code

The biggest difference between .Net Framework and .Net Core is the project file. Instead of a verbose XML ProjectName.csproj, we now have a much more compact SDK-style project file. I created two sample projects, one in .Net Core and one in .Net Framework, have a look:

As you can see – there are fewer dependencies in .NET Core. The project file looks like this:
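In its simplest form, an SDK-style project file for a class library is just a few lines (the target framework here is a typical default and may differ in your setup):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

</Project>
```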

A .Net Core project file only specifies a target framework; nothing more is needed. This is an example of the simplest class library created with Visual Studio. When a project gets bigger, the difference gets more noticeable. A great comparison of the two formats can be found here.

Performance

Since developers could build the framework anew, they could test and tweak code that was previously considered untouchable. Also, because of the rebuilt request pipeline, .Net Core is really fast when it comes to handling HTTP requests. Take a look at this comparison with other hosting environments:

Image from:
https://www.ageofascent.com/2019/02/04/asp-net-core-saturating-10gbe-at-7-million-requests-per-second/

You can see that ASP.NET is the 3rd fastest web server and can handle 7 million requests per second. This is impressive!

What is .Net Standard?

A .Net Standard is a specification of .Net APIs that are available across all platforms. Have a look at the lower part of this architecture:

Image from:
https://gigi.nullneuron.net/gigilabs/multi-targeting-net-standard-class-libraries/

.Net Standard is implemented in .Net Framework, in CoreFx (.Net Core) and in Mono. That means that once you target your class library at .Net Standard, it can run on any platform, but you will have fewer APIs to choose from. However, if you target your library at .Net Core, you can use all the features of .Net Core, but it can be used only by .Net Core apps.

What to choose? In all my projects I'm targeting .Net Standard, and for building standard back-end services it has always been enough. From version 2.0, .Net Standard supports a lot of APIs and you shouldn't have a problem making it work.

Can I use .Net Core from now on then?

Yes and no. I strongly recommend using it in your new projects, because it has multiple advantages. However, porting some legacy libraries or websites can be a hassle. .Net Standard and .Net Core support a lot of APIs, but not everything that was available in .Net Framework. Also, some 3rd party libraries will not be ported at all, so a major change in your project might be needed. Some Microsoft libraries are already ported, some are still in progress and some are totally rewritten. This means that .Net Core is not backward compatible and porting libraries to it can be easy or not possible at all.

All in all, using .Net Core is a step in the right direction, because Microsoft puts a lot of work into its development.

Thanks for reading and have a nice day!

Code review #4 – in-memory caching

This is a post on a series about great code review feedback, that I either gave or received. You can go ahead and read the previous ones here: https://www.michalbialecki.com/2019/06/21/code-reviews/

The context

Caching is an inseparable part of ASP.NET applications. It is the mechanism that makes our web pages load blazingly fast with very little code required. However, this blessing comes with responsibility. I need to quote one of my favorite characters here:

Downsides come into play when you're no longer sure whether the data you see is old or new. Caches in different parts of your ecosystem can make your app inconsistent and incoherent. But let's not get into details, since that's not the topic of this post.

Let’s say we have an API, that gets user by id and code looks like this:

[HttpGet("{id}")]
public async Task<JsonResult> Get(int id)
{
    var user = await _usersRepository.GetUserById(id);
    return Json(user);
}

Adding in-memory caching in .net core is super simple. You just need to add one line in Startup.cs: services.AddMemoryCache();

And then pass IMemoryCache interface as a dependency. Code with in-memory caching would look like this:

[HttpGet("{id}")]
public async Task<JsonResult> Get(int id)
{
    var cacheKey = $"User_{id}";
    if(!_memoryCache.TryGetValue(cacheKey, out UserDto user))
    {
        user = await _usersRepository.GetUserById(id);
        _memoryCache.Set(cacheKey, user, TimeSpan.FromMinutes(5));
    }

    return Json(user);
}

Review feedback

Why don’t you use IDistributedCache? It has in-memory caching support.

Explanation

Distributed cache is a different type of caching, where data is stored in an external service or storage. When your application scales and has more than one instance, you need your cache to be consistent. Thus, you need one place to cache data for all of your app instances. .Net Core supports distributed caching natively through the IDistributedCache interface.

All you need to do is change the caching registration in Startup.cs, for example to services.AddDistributedMemoryCache():

And make a few modifications in the code using the cache. First of all, you need to inject the IDistributedCache interface. Also, remember that your entity, in this example UserDto, has to be annotated with the [Serializable] attribute. Then, using this cache looks like this:

[HttpGet("{id}")]
public async Task<JsonResult> Get(int id)
{
    var cacheKey = $"User_{id}";
    UserDto user;
    var userBytes = await _distributedCache.GetAsync(cacheKey);
    if (userBytes == null)
    {
        user = await _usersRepository.GetUserById(id);
        userBytes = CacheHelper.Serialize(user);
        await _distributedCache.SetAsync(
            cacheKey,
            userBytes,
            new DistributedCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(5) });
    }

    user = CacheHelper.Deserialize<UserDto>(userBytes);
    return Json(user);
}

Using IDistributedCache is more complicated, because it is not strongly typed and you need to serialize and deserialize your objects. To not mess up my code, I created a CacheHelper class:

public static class CacheHelper
{
    public static T Deserialize<T>(byte[] param)
    {
        using (var ms = new MemoryStream(param))
        {
            IFormatter br = new BinaryFormatter();
            return (T)br.Deserialize(ms);
        }
    }

    public static byte[] Serialize(object obj)
    {
        if (obj == null)
        {
            return null;
        }

        var bf = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            bf.Serialize(ms, obj);
            return ms.ToArray();
        }
    }
}
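A side note: BinaryFormatter has since been flagged as insecure and is being phased out by Microsoft, so if I were writing this helper today I would sketch it with System.Text.Json (available from .Net Core 3.0) instead; no [Serializable] attribute is required then:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// JSON-based alternative to CacheHelper; same shape, different serializer.
public static class JsonCacheHelper
{
    public static byte[] Serialize(object obj)
        => obj == null ? null : JsonSerializer.SerializeToUtf8Bytes(obj);

    public static T Deserialize<T>(byte[] bytes)
        => JsonSerializer.Deserialize<T>(bytes);
}

public static class Program
{
    public static void Main()
    {
        var user = new Dictionary<string, string> { ["name"] = "test" };
        var bytes = JsonCacheHelper.Serialize(user);
        var back = JsonCacheHelper.Deserialize<Dictionary<string, string>>(bytes);
        Console.WriteLine(back["name"]); // round trip preserves the data
    }
}
```

The call sites in the controller stay the same; only the helper changes.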

Why distributed cache?

Distributed cache has several advantages over other caching scenarios where cached data is stored on individual app servers:

  • Is consistent across requests to multiple servers
  • Survives server restarts and app deployments
  • Doesn’t use local memory

Microsoft's implementation of the .net core distributed cache supports not only memory cache, but also SQL Server, Redis and NCache. They differ only by the extension method you use in Startup.cs. It is really convenient to have caching in one place. Serialization and deserialization could be a downside, but it also makes it possible to write one class that handles caching for the whole application. Having one cache class is always better than having multiple caches across an app.

When to use distributed memory cache?

  • In development and testing scenarios
  • When a single server is used in production and memory consumption isn’t an issue

If you would like to know more, I strongly recommend you to read more about:

  All code posted here you can find on my GitHub: https://github.com/mikuam/Blog