
.Net Core Global Tools – your custom app from nuget package

I love .net core. It is an awesome concept and a great, light framework to work with. One essential part of the framework environment is the .Net Core CLI. It's a set of cross-platform tools and commands that can create, build and publish your app. Along with the platform come Global Tools: a concept of a console application that can be distributed as a nuget package. Today, I'm going to show you how to make one.

Creating a Global Tool

Creating a console application in .net core is trivial, but creating a Global Tool, for me, wasn't. I tried a simple way: create a console application and then make some amendments to turn it into a tool. It didn't work out, so today I'm going to show you how to do it differently.

The first thing you need is a templates package. The .Net Core CLI does not ship with a template for a global tool, so you need to install one:

dotnet new --install McMaster.DotNet.GlobalTool.Templates

Once you do it, you can just create a new project:

dotnet new global-tool --command-name MichalBialecki.com.MikTrain

That worked perfectly, and after just a few changes to the Program.cs file my tool was ready. You can check it out in my repo: https://github.com/mikuam/console-global-tool-sample
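The template generates a program based on McMaster.Extensions.CommandLineUtils, but at its heart a global tool is just a console app. A stripped-down sketch of the Program.cs (the greeting is made up; the real logic lives in the repo):

using System;

namespace MichalBialecki.com.MikTrain
{
    public static class Program
    {
        // the entry point that runs when you type the tool's command name
        public static int Main(string[] args)
        {
            Console.WriteLine("Hello from miktrain!");
            return 0; // non-zero would signal an error to the shell
        }
    }
}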

The next step is to create a nuget package. There are a few ways to do that, but I'll go with the simplest one – using a command:

dotnet pack --output ./
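For dotnet pack to produce a tool package rather than an ordinary library package, the project file needs the tool-specific properties. The template already sets these up; roughly, they look like this:

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <PackAsTool>true</PackAsTool>
  <ToolCommandName>miktrain</ToolCommandName>
</PropertyGroup>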

After the build I have a package, but it's not yet ready to be sent to nuget.org. It's missing a license and an icon needed to pass validation. I ended up adding those files and editing the metadata to get it through. To have a more visual look at things, I'm using NuGet Package Explorer. When I open MichalBialecki.com.MikTrain.1.0.4.nupkg I see:

My nuspec file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
  <metadata>
    <id>MichalBialecki.com.MikTrain</id>
    <version>1.0.1</version>
    <authors>Michał Białecki</authors>
    <owners>Michał Białecki</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <license type="file">LICENSE.txt</license>
    <icon>icon.jpg</icon>
    <description>A sample console application to show Global Tools</description>
    <packageTypes>
      <packageType name="DotnetTool" />
    </packageTypes>
    <dependencies>
      <group targetFramework=".NETCoreApp2.2" />
    </dependencies>
  </metadata>
</package>

You might get some errors…

If you encounter the NU1212 error, it might mean that you are missing the packageType tag in your nuspec file. Check the one above.

If you encounter the NU1202 error, you're probably missing the Microsoft.NETCore.Platforms dependency.

For more hints, go to this great article.

I added my package via the nuget.org website, and after the package is verified, you will see:

If it's not available right away, you probably need to wait a couple of hours. At least I had to 🙂

Installing and running the application

Once the nuget package is validated and checked by nuget.org, you can try to install your app. To do it, you need to use the command:

dotnet tool install -g MichalBialecki.com.MikTrain --version 1.0.4

Installing my tool gave me this result:

And now I can run my app with the command: miktrain

What is it for?

A global tool is a console application that can be a handy utility, and I'm sure you have some of those in your organization. The power of Global Tools is the ability to install updates from a nuget package. Let's say something changed in a tool and you just want the newest version. You just need to execute a command:

dotnet tool update -g MichalBialecki.com.MikTrain

And your app is up to date. It can now be shared across the team, and updates can be spread much more elegantly. There's no longer a need to share a magic exe file 😛

 All code posted here you can find on my GitHub: https://github.com/mikuam/console-global-tool-sample

.Net Core – introduction

.Net Core is a catchphrase that you hear more and more often in both developer discussions and job offers. You've probably already heard that it's fast, simple and runs on multiple platforms. In this post, I'd like to sum everything up and highlight what I like the most about it.

Why a new framework?

Microsoft successfully released the full .Net Framework more than a dozen years ago, and every version since has brought new features. So what are its biggest drawbacks?

  • it runs and can only be developed on Windows
  • it’s not modular, you target a framework as a dependency, rather than multiple packages
  • ASP.NET is not fast enough compared to other frameworks
  • it’s not open-source, so the community cannot really contribute

Because of that, Microsoft needed to introduce something new that would run on Windows and Linux and be small, fast and modular. This is why .Net Core was introduced. Version 1.0 was developed in parallel with .Net Framework 4.6 and was initially released in the middle of 2016 as a set of open-source libraries hosted on GitHub.

Image from:
https://codeburst.io/what-you-need-to-know-about-asp-net-core-30fec1d33d78

Cross-platform

Software written in .Net Core can now be run on Windows, Linux and MacOS. What's even more important, you can also develop and host .Net Core apps on any of those operating systems. So from now on, your ASP.NET website does not need IIS for hosting – it can be hosted on Linux. You can even create a docker container out of it and deploy it in the cloud. This is great in terms of maintenance, scalability, and performance. It also means that you no longer need Microsoft infrastructure to host your .net app.

Modularity

In .Net Framework, new features were added only with a new release. Since in .Net Core every library is a nuget package, your app does not need to depend on one big framework – you can have different versions of System.IO and System.Net.Http.

What's more, you install only the basic dependency, Microsoft.NETCore.App, and the others can be installed depending on what you need. This is reflected in a smaller app size.

Open-source

From the very beginning .Net Core was published as a dozen GitHub repositories. Now there are 179 of those! You are free to take a look at what features are coming and what the core looks like. It also means that while writing code, you can step inside a framework method to check out what's going on underneath.

Around those repositories grew a community eager to develop .Net Core, and they are constantly improving it. You can be a part of it! As in any GitHub repository, you can raise an issue if you notice a bug or create a pull request to propose a solution.

Easier to write code

The biggest difference between .Net Framework and .Net Core is the project file. Instead of a verbose ProjectName.csproj that lists every file and reference, we now have a much more compact SDK-style project file. I created two sample projects, one in .Net Core and one in .Net Framework, have a look:

As you can see – there are fewer dependencies in .NET Core. The project file looks like this:

A .Net Core project file only specifies a target framework, nothing more is needed. This is an example of the simplest class library created with Visual Studio. When the project gets bigger, this difference becomes even more noticeable. A great comparison of the two formats can be found here.
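For reference, the whole project file of such a library is along these lines (the exact target framework will differ from project to project):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.2</TargetFramework>
  </PropertyGroup>

</Project>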

Performance

Since developers could build the framework anew, they could test and tweak code that was supposed to be untouchable before. Also, thanks to the rebuilt request pipeline, .Net Core is really fast when it comes to handling HTTP requests. Take a look at this comparison with other hosting environments:

Image from:
https://www.ageofascent.com/2019/02/04/asp-net-core-saturating-10gbe-at-7-million-requests-per-second/

You can see that ASP.NET Core is the 3rd fastest web server and can handle 7 million requests per second. This is impressive!

What is .Net Standard?

.Net Standard is a specification of .Net APIs that are available across all platforms. Have a look at the lower part of this architecture:

Image from:
https://gigi.nullneuron.net/gigilabs/multi-targeting-net-standard-class-libraries/

.Net Standard is implemented in .Net Framework, in CoreFx (.Net Core) and in Mono. That means that once you target your class library at .Net Standard, it can run on any platform, but you will have fewer APIs to choose from. If you target your library at .Net Core instead, you can use all the features of .Net Core, but it can only be used by .Net Core apps.

What to choose? In all my projects I'm targeting .Net Standard, and for building standard back-end services it has always been enough. From version 2.0, .Net Standard supports a lot of APIs and you shouldn't have a problem making it work.

Can I use .Net Core from now on then?

Yes and no. I strongly recommend using it in new projects, because it has multiple advantages. However, porting some legacy libraries or websites can be a hassle. .Net Standard and .Net Core support a lot of APIs, but not everything that was available in .Net Framework is there. Also, some 3rd-party libraries will not be ported at all, so a major change in your project might be needed. Some Microsoft libraries are already ported, some are still in progress and some are totally rewritten. This means that .Net Core is not backward compatible, and porting libraries to it can be easy or not possible at all.

All in all, using .Net Core is a step in the right direction, because Microsoft puts a lot of work into its development.

Thanks for reading and have a nice day!

Code review #4 – in-memory caching

This is a post in a series about great code review feedback that I either gave or received. You can go ahead and read the previous ones here: https://www.michalbialecki.com/2019/06/21/code-reviews/

The context

Caching is an inseparable part of ASP.net applications. It is the mechanism that makes our web pages load blazing fast with very little code required. However, this blessing comes with responsibility. I need to quote one of my favorite characters here: with great power comes great responsibility.

The downsides come into play when you're no longer sure whether the data you see is old or new. Caches in different parts of your ecosystem can make your app inconsistent and incoherent. But let's not get into details, since that's not the topic of this post.

Let's say we have an API that gets a user by id, and the code looks like this:

[HttpGet("{id}")]
public async Task<JsonResult> Get(int id)
{
    var user = await _usersRepository.GetUserById(id);
    return Json(user);
}

Adding in-memory caching in .net core is super simple. You just need to add one line in Startup.cs:
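That line is the IMemoryCache registration in ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    // registers IMemoryCache in the DI container
    services.AddMemoryCache();
}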

And then pass the IMemoryCache interface as a dependency. The code with in-memory caching looks like this:

[HttpGet("{id}")]
public async Task<JsonResult> Get(int id)
{
    var cacheKey = $"User_{id}";
    if(!_memoryCache.TryGetValue(cacheKey, out UserDto user))
    {
        user = await _usersRepository.GetUserById(id);
        _memoryCache.Set(cacheKey, user, TimeSpan.FromMinutes(5));
    }

    return Json(user);
}

Review feedback

Why don’t you use IDistributedCache? It has in-memory caching support.

Explanation

A distributed cache is a different type of caching, where data is stored in an external service or storage. When your application scales and has more than one instance, you need your cache to be consistent. Thus, you need one place to cache data for all of your app instances. .Net Core supports distributed caching natively, via the IDistributedCache interface.

All you need to do is change the caching registration in Startup.cs:
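For the in-memory implementation of IDistributedCache, it is just:

public void ConfigureServices(IServiceCollection services)
{
    // in-memory implementation of IDistributedCache
    services.AddDistributedMemoryCache();
}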

And make a few modifications in the code using the cache. First of all, you need to inject the IDistributedCache interface. Also remember that your entity, in this example UserDto, has to be annotated with the Serializable attribute. Then, using the cache looks like this:

[HttpGet("{id}")]
public async Task<JsonResult> Get(int id)
{
    var cacheKey = $"User_{id}";
    UserDto user;
    var userBytes = await _distributedCache.GetAsync(cacheKey);
    if (userBytes == null)
    {
        user = await _usersRepository.GetUserById(id);
        userBytes = CacheHelper.Serialize(user);
        await _distributedCache.SetAsync(
            cacheKey,
            userBytes,
            new DistributedCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(5) });
    }

    user = CacheHelper.Deserialize<UserDto>(userBytes);
    return Json(user);
}

Using IDistributedCache is more complicated, because it doesn't support strongly typed values and you need to serialize and deserialize your objects. To not mess up my code, I created a CacheHelper class:

public static class CacheHelper
{
    public static T Deserialize<T>(byte[] param)
    {
        using (var ms = new MemoryStream(param))
        {
            IFormatter br = new BinaryFormatter();
            return (T)br.Deserialize(ms);
        }
    }

    public static byte[] Serialize(object obj)
    {
        if (obj == null)
        {
            return null;
        }

        var bf = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            bf.Serialize(ms, obj);
            return ms.ToArray();
        }
    }
}

Why distributed cache?

Distributed cache has several advantages over other caching scenarios where cached data is stored on individual app servers:

  • Is consistent across requests to multiple servers
  • Survives server restarts and app deployments
  • Doesn’t use local memory

Microsoft's implementation of the .net core distributed cache supports not only a memory cache, but also SQL Server, Redis and NCache distributed caching. The only difference is the extension method you use in Startup.cs. It is really convenient to have caching in one place. Serialization and deserialization could be a downside, but it also makes it possible to create one class that handles caching for the whole application. Having one cache class is always better than having multiple caches across an app.
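For example, switching to Redis or SQL Server is just a different registration in ConfigureServices. A sketch below – the option values (connection string, table name, instance name) are made up, and the extension methods come from the Microsoft.Extensions.Caching.Redis and Microsoft.Extensions.Caching.SqlServer packages:

// Redis
services.AddDistributedRedisCache(options =>
{
    options.Configuration = "localhost";
    options.InstanceName = "users-";
});

// SQL Server
services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = "your connection string here";
    options.SchemaName = "dbo";
    options.TableName = "CacheEntries";
});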

When to use distributed memory cache?

  • In development and testing scenarios
  • When a single server is used in production and memory consumption isn’t an issue

If you would like to know more, I strongly recommend reading more about it.

  All code posted here you can find on my GitHub: https://github.com/mikuam/Blog

Postman the right way

Postman is a great tool to quickly create a request and run it against your API. It is a flexible tool created to make your work simpler. You can save your requests, define variables for different environments and share it all with your team. Today I'll show you the most useful and practical features of Postman.

Swagger is not good enough

Swagger is a package that eases working with an API, and it comes almost out of the box. With a little configuration, it exposes a user-friendly interface alongside your API. Postman, on the other hand, is an external tool that makes requests to your API but does not interfere with it. It comes in handy in cases where Swagger is not enough. Its main advantage is that you can store your requests, so you don't need to fill in the request data over and over again. In fact, both tools are great and can be used together, but with slightly different purposes.

Let's look at an example. This is how Swagger looks under .net core:

And this is a Postman window with the same request:

Compared to Swagger, Postman is much more compact, and because it is an external tool, it offers much more.

The best things

I won't list the whole documentation here, but I'm going to show you what I like the most and what I use on a day-to-day basis.

Simply provide authorization

In the Authorization tab you can easily define what you need and it will be added to your request. It supports a whole list of authorization methods and frankly, I have only used a few of them. For example, if you need to provide basic authorization, you just put in a user and password, without needing to encode them in base64 manually.

Save request to a collection

Next to the Send button, there is an option to save a request and put it in a collection. If you work on different micro-services, you can divide your requests by service or by context and put them into different collections. You can even create nested folders to separate requests even further.

Add variables

The great thing that I discovered recently is the ability to define your own variables. To do this you need to click the settings icon on the right and choose the Globals button at the bottom of the popup.

Then you will be able to define variables you can use later. I added a URL for my service (document-service-url), since I will be using it in every request to my service.

You can use your variable by putting its name in double curly braces, for example something like {{document-service-url}}/api/documents. This is very convenient when you suddenly need to change a port in your URL or a password in the basic authorization method.

Define environments

As you saw, the URL that I used is defined by a variable that I can use in every request, but only locally. What about other environments? Postman supports that as well. To add an environment, just click on the settings icon on the right and add one.

Now when I choose Development environment, the value of my variable will be different.

Share your requests across the team

This is all great, but how can you send a request to a friend, so they can use it as well? The easiest way is to export the entire collection as a json file, which your colleague can then import into their Postman.

The other thing you can do is share your collection. At the top there is a My Workspace dropdown, where you can define teams.

Once you click on a team, Postman will load all of the team's collections. The great thing about it is that you can work with the same requests within one team and keep them up to date as the work goes on. This was a real game changer for me and, in fact, the motivation to write this post! There is, however, a little drawback – in the free version, you can only share 25 requests across the team, and this is not a lot.

To sum up – I am very impressed by Postman and the possibilities it offers. It actually offers much more than I mentioned. I also like its visual side and simplicity. I hope you like it too 🙂

Code reviews

This is a series about great code reviews that I either gave or received. Code reviews are crucial for code quality and I strongly recommend having them in your company. Two heads are always better than one, especially in an atmosphere of cooperation and mutual learning. I have the pleasure and luck to work in such a team, and I have to admit that code review discussions are always appreciated.

Without further ado, let's get to it. Have a good read! 🙂

Have you ever received great review feedback? If you have, feel free to write to me! I bet it's worth sharing with the community and the readers of this blog.

Code review #3 – use threads like Mr Fowler

This is another post in a series about great code review feedback that I either gave or received. It always consists of 3 parts: context, review feedback and explanation. You can go ahead and read the previous ones here: https://www.michalbialecki.com/2019/06/21/code-reviews/. This post is about doing things the right way.

The context

In my daily job we have a custom scheduling service that is very simple. It just makes POST requests without a body, based on a CRON expression, and then checks whether the request was successful. The problem occurred when jobs were getting longer and longer, so that they reached the request timeout and appeared to have failed, while on the receiver side they ran just fine. We decided that the receiver should return a success right away and do its work in a separate thread. This is how I approached the task:

Instead of having a call with an await:

[HttpPost("ExportUsers")]
public async Task<IActionResult> ExportUsers()
{
    var result = await _userService.ExportUsersToExternalSystem();
    return result ? Ok() : StatusCode(StatusCodes.Status500InternalServerError);
}

I added a simple wrapper with Task.Run:

[HttpPost("ExportUsers")]
public IActionResult ExportUsers()
{
    Task.Run(
        async () =>
            {
                var result = await _userService.ExportUsersToExternalSystem();
                if (!result)
                {
                    // log error
                }
            });
            
    return Ok();
}

Review feedback

Start a new thread as David Fowler does.

Explanation

When using Task.Run, a thread from the thread pool is blocked. This is not a bad idea in itself, but it should be used wisely. The thread should be released in a reasonable timeframe, not blocked for the lifetime of the application. An anti-pattern here would be to start listening to Service Bus messages inside Task.Run, which would block a thread forever.

After refactoring, my code looks like this:

[HttpPost("ExportUsers")]
public IActionResult ExportUsers()
{
    var thread = new Thread(async () =>
    {
        var result = await _userService.ExportUsersToExternalSystem();
        if (!result)
        {
            // log error
        }
    })
    {
        IsBackground = true
    };
    thread.Start();

    return Ok();
}

You can look at the more detailed explanation by David Fowler here: https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/HEAD@%7B2019-05-16T19:27:54Z%7D/AsyncGuidance.md#avoid-using-taskrun-for-long-running-work-that-blocks-the-thread

There are many examples there that really make you think about how to write better code, so I strongly encourage you to read it. Enjoy! 🙂

Code review #2 – remember about your awaits

This is another post about great code review feedback that I either gave or received. It always consists of 3 parts: context, review feedback and explanation. You can go ahead and read the previous ones here: https://www.michalbialecki.com/2019/06/21/code-reviews/. This time I'm going to show you a very simple bug I found.

The context

The problem occurred when I was investigating code with multiple async calls. After some refactoring, it came down to something like this:

public async Task InsertUser()
{
    try
    {
        // async stuff here

        client.PostAsync("http://localhost:49532/api/users/InsertMany", null).ConfigureAwait(false);

        // more async calls
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}

The thing was that my call to the InsertMany endpoint was somehow not working, and I didn't catch any exceptions in this code. Do you know what's wrong with it?

Review feedback

Oh, you haven’t awaited this one!

Explanation

That's correct. There is a try-catch block that should catch every exception, but when an async call is not awaited, a Task is returned and the method carries on. As a result, the task may never complete before the code finishes, and any exception it throws is swallowed instead of being caught. This is potentially dangerous, because the code will compile. The IDE will give you a hint, but it can be easily overlooked.

As a good practice, you can also check whether the call was a success:

public async Task InsertUser()
{
    try
    {
        // async stuff here

        var response = await client.PostAsync("http://localhost:49532/api/users/InsertMany", null).ConfigureAwait(false);
        response.EnsureSuccessStatusCode();

        // more async calls
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}

So take good care of your async / await calls 🙂

Bulk insert in Dapper

Dapper is a simple object mapper, a nuget package that extends the IDbConnection interface. This powerful package comes in handy when writing simple CRUD operations. The thing I struggle with from time to time is handling big data with Dapper. Handling hundreds of thousands of objects at once brings a whole variety of performance problems. Today I'll show you how to handle many inserts with Dapper.

The problem

Let's have a simple repository that inserts users into the DB.
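The table in the DB is nothing fancy – a minimal sketch (the column types are my assumption, based on the insert statement used below):

CREATE TABLE [dbo].[Users] (
    [Id]            INT IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    [Name]          NVARCHAR (50)       NOT NULL,
    [LastUpdatedAt] DATETIME            NOT NULL
);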

Now let’s have a look at the code:

public async Task InsertMany(IEnumerable<string> userNames)
{
    using (var connection = new SqlConnection(ConnectionString))
    {
        await connection.ExecuteAsync(
            "INSERT INTO [Users] (Name, LastUpdatedAt) VALUES (@Name, getdate())",
            userNames.Select(u => new { Name = u })).ConfigureAwait(false);
    }
}

Very simple code that takes user names and passes a collection of objects to the Dapper extension method ExecuteAsync. This is a wonderful shortcut: instead of one object, you can pass a collection and have this sql run for every object. No need to write a loop for that! But how is this done in Dapper? Lucky for us, Dapper's code is open and available on GitHub. In SqlMapper.Async.cs on line 590 you will see:

There is a loop inside the code. Fine, nothing wrong with that… as long as you don't need to work with big data. With this approach, you end up making a call to the DB for every object in the list. We can do better.

What if we could…

What if we could merge multiple insert sqls into one big sql? I got this brilliant idea from my colleague, Miron. Thanks, bro! 🙂 So instead of sending a separate statement for every user (sample values in the sketches below), like this:
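-- one statement per user, sample values
INSERT INTO [Users] (Name, LastUpdatedAt) VALUES ('Mik', getdate())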

We can have one statement that inserts many rows at once:
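INSERT INTO [Users] (Name, LastUpdatedAt) VALUES
('Mik', getdate()),
('John', getdate()),
('Kate', getdate())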

The limit here is 1000, because SQL Server does not allow more than 1000 value rows in one insert command. The code gets a bit more complicated, since we need to create a separate sql for every 1000 users.

public async Task InsertInBulk(IList<string> userNames)
{
    var sqls = GetSqlsInBatches(userNames);
    using (var connection = new SqlConnection(ConnectionString))
    {
        foreach (var sql in sqls)
        {
            await connection.ExecuteAsync(sql);
        }
    }
}

private IList<string> GetSqlsInBatches(IList<string> userNames)
{
    var insertSql = "INSERT INTO [Users] (Name, LastUpdatedAt) VALUES ";
    var valuesSql = "('{0}', getdate())";
    var batchSize = 1000;

    var sqlsToExecute = new List<string>();
    var numberOfBatches = (int)Math.Ceiling((double)userNames.Count / batchSize);

    for (int i = 0; i < numberOfBatches; i++)
    {
        var userToInsert = userNames.Skip(i * batchSize).Take(batchSize);
        var valuesToInsert = userToInsert.Select(u => string.Format(valuesSql, u));
        sqlsToExecute.Add(insertSql + string.Join(',', valuesToInsert));
    }

    return sqlsToExecute;
}

Let's compare!

The code is nice and tidy, but is it faster? To check, I used a local database and a simple user name generator. It's just a random, 10-character string.

public async Task<JsonResult> InsertInBulk(int? number = 100)
{
    var userNames = new List<string>();
    for (int i = 0; i < number; i++)
    {
        userNames.Add(RandomString(10));
    }

    var stopwatch = new Stopwatch();
    stopwatch.Start();

    await _usersRepository.InsertInBulk(userNames);

    stopwatch.Stop();
    return Json(
        new
            {
                users = number,
                time = stopwatch.Elapsed
            });
}

I tested this code for 100, 1000, 10k and 100k users. The results surprised me.

The more users I added, the bigger the performance gain. For 10k users it's a 42x improvement, and for 100k users a 48x improvement in performance. This is awesome!

It’s not safe

Immediately after posting this article, I got comments from you that this code is not safe. Joining raw strings like that into a SQL statement is a major security flaw, because it's exposed to SQL injection. And that is something we need to take care of. So I went with the code that Nicholas Paldino suggested in his comment and used DynamicParameters to pass values to my sql statement.

public async Task SafeInsertMany(IEnumerable<string> userNames)
{
    using (var connection = new SqlConnection(ConnectionString))
    {
        var parameters = userNames.Select(u =>
            {
                var tempParams = new DynamicParameters();
                tempParams.Add("@Name", u, DbType.String, ParameterDirection.Input);
                return tempParams;
            });

        await connection.ExecuteAsync(
            "INSERT INTO [Users] (Name, LastUpdatedAt) VALUES (@Name, getdate())",
            parameters).ConfigureAwait(false);
    }
}


This code works fine; however, its performance is comparable to the regular approach. So it is not really a way to insert big amounts of data. The ideal way to go here is to use SQL Bulk Copy and forget about Dapper.
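For completeness, here is a minimal SqlBulkCopy sketch (not part of the measured comparison above; it assumes the Users table from the beginning of this post and uses System.Data and System.Data.SqlClient):

public async Task BulkCopyUsers(IEnumerable<string> userNames)
{
    // SqlBulkCopy streams rows to the server instead of issuing INSERT statements
    var table = new DataTable();
    table.Columns.Add("Name", typeof(string));
    table.Columns.Add("LastUpdatedAt", typeof(DateTime));
    foreach (var name in userNames)
    {
        table.Rows.Add(name, DateTime.UtcNow);
    }

    using (var connection = new SqlConnection(ConnectionString))
    {
        await connection.OpenAsync();
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "[Users]";
            // map only the columns we provide; Id is an identity column
            bulkCopy.ColumnMappings.Add("Name", "Name");
            bulkCopy.ColumnMappings.Add("LastUpdatedAt", "LastUpdatedAt");
            await bulkCopy.WriteToServerAsync(table);
        }
    }
}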

  All code posted here you can find on my GitHub: https://github.com/mikuam/Blog

I know that there is a commercial Dapper extension that helps with bulk operations. You can have a look here. But wouldn't it be nice to have a free nuget package for it? What do you think?

Code review #1 – dapper and varchar parameters

This is the first post about great code review feedback that I either gave or received. It will always consist of 3 parts: context, review feedback and explanation. You can find the whole series here: https://www.michalbialecki.com/2019/06/21/code-reviews/. So let's not wait any longer and get to it.

The context

This is a simple ASP.Net application that queries the database to get a count of elements filtered by one parameter. In this case we need the number of users with a given country code, which is always a two-character string.

This is what the DB schema looks like:
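Only the CountryCode column really matters here – a sketch of the table (the other columns are my assumption):

CREATE TABLE [dbo].[Users] (
    [Id]          INT IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    [Name]        NVARCHAR (50)       NOT NULL,
    [CountryCode] VARCHAR (2)         NOT NULL
);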

The code in the .net app is written with the Dapper nuget package, which extends the functionality of IDbConnection and offers entity mapping with considerably good performance. It looks like this:

public async Task<int> GetCountByCountryCode(string countryCode)
{
    using (var connection = new SqlConnection(ConnectionString))
    {
        return await connection.QuerySingleAsync<int>(
            "SELECT count(*) FROM [Users] WHERE CountryCode = @CountryCode",
            new { CountryCode = countryCode }).ConfigureAwait(false);
    }
}

Looks pretty standard, right? What is wrong here then?

Review feedback

Please convert the countryCode parameter to an ANSI string in the GetCountByCountryCode method, because if you use it like that, it's not optimal.

Explanation

Notice that CountryCode in the database schema is a varchar(2), which means it stores two 1-byte characters. On the contrary, the nvarchar type stores 2 bytes per character and can hold multilingual data. The .net String type is unicode by default, so if we pass the countryCode string to SQL as-is, it will be sent as an nvarchar parameter and a conversion will be needed to compare it with the varchar column.

The correct code should look like this:

public async Task<int> GetCountByCountryCodeAsAnsi(string countryCode)
{
    using (var connection = new SqlConnection(ConnectionString))
    {
        return await connection.QuerySingleAsync<int>(
            "SELECT count(*) FROM [Users] WHERE CountryCode = @CountryCode",
            new { CountryCode = new DbString { Value = countryCode, IsAnsi = true, Length = 2 } })
            .ConfigureAwait(false);
    }
}

If we run SQL Server Profiler and check what requests we are making, this is what we get:

As you can see, the first query needs to convert the CountryCode parameter from nvarchar(4000) to varchar(2) in order to compare it.

To check how that impacts performance, I created a SQL table with 1,000,000 (one million) records and compared the results.

Before the review it took 242 milliseconds, and after the review it took only 55 milliseconds. So as you can see, it is more than a 4x performance improvement in this specific case.

  All code posted here you can find on my GitHub: https://github.com/mikuam/console-app-net-core

Perfect console application in .net Core: add unit tests

Unit tests are a crucial part of the software development process. In the late 1990s, Kent Beck stated in his Extreme Programming methodology that writing tests is the most important part of writing software. You can read a bit more about it in Martin Fowler's article.

This is a part of a series of articles about writing a perfect console application in .net core 2. Feel free to read more:

My example

To have a simple example of how to add tests to a .net core console application, I created a TicketStore app. It is a console app for reserving cinema tickets. Its structure looks like this:

Here is what command handling looks like:

    command = Console.ReadLine();
    if (!_commandValidator.IsValid(command))
    {
        Console.WriteLine($"Sorry, command: '{command}' not recognized.");
    }

And CommandValidator looks like this:
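It is a small class; a minimal reconstruction (the exact regex is my guess, matching the rules described below):

using System.Text.RegularExpressions;

namespace MichalBialecki.com.TicketStore.Console.Helpers
{
    public class CommandValidator
    {
        // column A-H, seat 1-15, e.g. "A1" or "H15"
        private static readonly Regex SeatRegex = new Regex("^[A-H](1[0-5]|[1-9])$");

        public bool IsValid(string command)
        {
            return command != null && SeatRegex.IsMatch(command);
        }
    }
}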

As you can see, the validator contains a regular expression and parsing logic that can always be faulty. We only allow a column to be from A to H and a seat to be between 1 and 15. Let's add unit tests to be sure that it works that way.

Adding a test project

The project I would like to test is MichalBialecki.com.TicketStore.Console, so I need to add a class library named MichalBialecki.com.TicketStore.Console.Tests.

Adding unit tests packages

To write unit tests I’m adding my favourite packages:

  • NUnit – unit test framework
  • NUnit3TestAdapter – package to run tests
  • NSubstitute – mocking framework
  • Microsoft.NET.Test.Sdk – it's important to remember this one; tests will not run without it

Now we can start writing tests.

First test

I added a CommandValidatorTests class, and now my project structure looks like this:

And the test looks like this:

    using MichalBialecki.com.TicketStore.Console.Helpers;
    using NUnit.Framework;

    [TestFixture]
    public class CommandValidatorTests
    {
        private CommandValidator _commandValidator;

        [SetUp]
        public void SetUp()
        {
            _commandValidator = new CommandValidator();
        }

        [TestCase("A1", true)]
        [TestCase("A15", true)]
        [TestCase("A11", true)]
        [TestCase("H15", true)]
        [TestCase("H16", false)]
        [TestCase("K15", false)]
        [TestCase("I4", false)]
        [TestCase("K.", false)]
        [TestCase("", false)]
        [TestCase(null, false)]
        public void IsValid_GivenCommand_ReturnsExpectedResult(string command, bool expectedResult)
        {
            // Arrange & Act
            var result = _commandValidator.IsValid(command);

            // Assert
            Assert.AreEqual(expectedResult, result);
        }
    }

In the Resharper unit test sessions window, all tests pass.

Notice how the test results are shown – everything is clear at first sight. You can immediately see what method is tested and under what conditions.

If you're interested in the best practices for writing unit tests, have a look at my article: https://www.michalbialecki.com/2019/01/03/writing-unit-tests-with-nunit-and-nsubstitute/. It will guide you through the whole process and clearly explain best practices.

  All code posted here you can find on my GitHub: https://github.com/mikuam/console-app-net-core