Recognizing low performance: a new start?

While we have achieved great performance gains in other areas, .NET still lags behind other platforms in data access benchmarks.

As an example, the TechEmpower Fortunes benchmark shows similar numbers across raw ADO.NET, Dapper and EF Core variations: throughput flatlines or even decreases as concurrent requests increase, which seems to indicate that bottlenecks are preventing efficient hardware utilization.

The performance of accessing databases using .NET should be much more competitive, and we (several people from Microsoft and the .NET developer community) are going to do something about it.

via GitHub – aspnet/DataAccessPerformance: Benchmarks, prototypes and discussions for database access performance investigation

.NET Core 2.1

.NET Core 2.1 is coming (94% complete – .NET Core 2.1, next major release): a preview is supposed to be available in Q1 2018, with a GA in Q2 (see the roadmap).

Please do not mix up .NET Core 2.1 and .NET Core SDK 2.1.2: the latter SDK includes the previously released .NET Core 2.0.3 runtime and is not related to .NET Core 2.1. Confusing, isn’t it?

After the scrambled genesis of .NET Core 2.0 (remember the project.json turnaround, responsible for poor tooling support?), the watchwords for 2.1 are strengthening and efficiency.


  • Span&lt;T&gt;/Memory&lt;T&gt;, recently introduced alongside C# 7.2, will be generalized (see Adam Sitnik's technical presentation .NET Core: Performance Storm, or this series from Immo Landwerth: video 1, video 2 and video 3). The main goal is to reduce heap allocations:
    • limit useless memory allocations (why allocate a new string with a separate copy of the data just to read an HTTP request header, when the complete header is already in memory?).
    • limit useless string format conversions (the web is mainly UTF-8 whereas .NET strings are UTF-16), though Utf8String is not expected to be made available in 2.1.
    • favor allocation-free algorithms.
  • Broad use of Span/Memory. From Add initial Span/Buffer-based APIs across corefx, we can see it affects a large part of the BCL:
      • System.BitConverter
      • System.Convert
      • System.Random
      • Primitive Parse methods
      • System.Guid
      • System.String
      • System.IO.Stream
      • System.IO.BufferStream and System.IO.ReadOnlyBufferStream
      • System.IO.TextReader and System.IO.TextWriter
      • System.IO.BinaryReader and System.IO.BinaryWriter
      • System.IO.File
      • System.Text.StringBuilder
      • System.Text.Encoding
      • System.Numerics
      • System.Net.IPAddress
      • System.Net.Sockets
      • System.Net.WebSockets.WebSocket
      • System.Net.Http
      • System.Security.Cryptography
  • Integration of Microsoft.Windows.Compatibility (see Ship .NET Framework compatibility pack). For a detailed description of Microsoft.Windows.Compatibility, see Announcing the Windows Compatibility Pack for .NET Core.
  • Stack trace improvements in .NET Core 2.1, as demonstrated by the Age of Ascent team.
  • Garbage collection customization (if the GC is less involved, it becomes possible to use lazier algorithms). After a first clue last summer, recent news suggests that garbage collection customization will be standard in 2.1, paving the way for specific implementations (and maybe open-source versions). For now, a zero GC (one that never collects) has been demonstrated.
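To make the allocation argument above concrete, here is a minimal sketch (plain C#, no framework assumed) of how Span&lt;char&gt; slices an in-memory request line into its parts without allocating any intermediate strings:

```csharp
using System;

class SpanDemo
{
    static void Main()
    {
        // A request line already sitting in memory (e.g. a network buffer).
        ReadOnlySpan<char> requestLine = "GET /index.html HTTP/1.1".AsSpan();

        // Slicing a span creates a view over the same memory:
        // no new string, no copy, no heap allocation.
        int firstSpace = requestLine.IndexOf(' ');
        ReadOnlySpan<char> method = requestLine.Slice(0, firstSpace);

        ReadOnlySpan<char> rest = requestLine.Slice(firstSpace + 1);
        ReadOnlySpan<char> path = rest.Slice(0, rest.IndexOf(' '));

        // ToString() is only called here to print; the parsing itself
        // never touched the heap.
        Console.WriteLine(method.ToString()); // GET
        Console.WriteLine(path.ToString());   // /index.html
    }
}
```

The same pattern is what lets a web server pick an HTTP method or header value out of a buffer it already owns, which is exactly the "complete header is already in memory" case mentioned above.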

ASP.NET Core 2.1

  • ManagedHandler: a new fully managed HttpClient handler implementation to replace WinHTTP on Windows and libcurl on Unix/macOS.
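As a hedged sketch: in the 2.1 previews the managed handler can reportedly be opted into through an AppContext switch (the switch name below is the preview-era one and may well change before release); the point is that existing HttpClient calling code stays unchanged:

```csharp
using System;
using System.Net.Http;

class ManagedHandlerDemo
{
    static void Main()
    {
        // Preview-era opt-in switch for the managed handler; treat the
        // name as provisional until 2.1 ships.
        AppContext.SetSwitch("System.Net.Http.UseManagedHttpClientHandler", true);

        AppContext.TryGetSwitch("System.Net.Http.UseManagedHttpClientHandler",
                                out bool enabled);
        Console.WriteLine(enabled); // True

        // HttpClient code is unaffected: the handler selection happens
        // underneath, based on the switch and the runtime.
        using (var client = new HttpClient())
        {
            Console.WriteLine(client != null); // True
        }
    }
}
```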

EF Core 2.1

EF Core continues to fill in missing features (the roadmap is here):

  • GroupBy support
  • Better Transaction support
  • SqlGeometry, SqlGeography
  • Cosmos DB provider
  • Improved Application Insights monitoring
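The GroupBy item is about query translation: today EF Core evaluates GroupBy on the client, while 2.1 is expected to translate it into a SQL GROUP BY. The query shape itself is plain LINQ; this sketch runs over in-memory data (the Customer/Total fields are hypothetical) just to show the form that would be translated when written against a DbSet:

```csharp
using System;
using System.Linq;

class GroupByDemo
{
    static void Main()
    {
        // Hypothetical order data; against an EF Core 2.1 DbSet, the same
        // query shape is expected to become GROUP BY + SUM in SQL instead
        // of pulling every row to the client.
        var orders = new[]
        {
            new { Customer = "alice", Total = 10m },
            new { Customer = "alice", Total = 15m },
            new { Customer = "bob",   Total = 7m  },
        };

        var totals = orders
            .GroupBy(o => o.Customer)
            .Select(g => new { Customer = g.Key, Sum = g.Sum(o => o.Total) });

        foreach (var t in totals)
            Console.WriteLine($"{t.Customer}: {t.Sum}");
        // alice: 25
        // bob: 7
    }
}
```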


How to implement a formatter in ASP.NET Core

How to handle raw data in the request body, either manually or by registering a custom InputFormatter: Accepting Raw Request Body Content in ASP.NET Core API Controllers – Rick Strahl’s Web Log.

One big issue with automatic conversion between the content body and an action parameter is how to handle oversized content (in a DoS attack, for example). Reading an arbitrary content body into a byte array can be very dangerous for server health when the body is large. It also rules out the generally preferable streaming approach.
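One defensive pattern (a framework-neutral sketch, not Rick Strahl's code) is to cap how many bytes of the body you are willing to buffer, and reject anything larger before it can exhaust memory:

```csharp
using System;
using System.IO;
using System.Text;

class BoundedBodyReader
{
    // Reads at most maxBytes from the stream; throws if the body is larger.
    // This avoids buffering an attacker-controlled body of arbitrary size.
    static byte[] ReadAtMost(Stream body, int maxBytes)
    {
        var buffer = new byte[maxBytes];
        int total = 0;
        int read;
        while ((read = body.Read(buffer, total, maxBytes - total)) > 0)
            total += read;

        // If the stream still has data, the body exceeded the limit.
        if (body.ReadByte() != -1)
            throw new InvalidDataException("Request body too large.");

        Array.Resize(ref buffer, total);
        return buffer;
    }

    static void Main()
    {
        // A small body fits within the cap and is returned whole.
        var small = new MemoryStream(Encoding.UTF8.GetBytes("hello"));
        Console.WriteLine(Encoding.UTF8.GetString(ReadAtMost(small, 16))); // hello

        // A 32-byte body against a 16-byte cap is rejected early.
        var big = new MemoryStream(new byte[32]);
        try { ReadAtMost(big, 16); }
        catch (InvalidDataException) { Console.WriteLine("rejected"); }
    }
}
```

In a real controller the stream would be the request body rather than a MemoryStream, but the principle is the same: enforce the limit while reading, never after.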