Does the usage of Thread.Sleep(…) warrant making the method Async in an API

Say you are developing an API where you have to use Thread.Sleep(…) because you are working with some device via a COM port and you need to wait for a predefined amount of time before you can read from the device, and there is no way around it. For example:
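Below is a minimal sketch of what such a method might look like; IComDevice, SendCommand and Read are hypothetical stand-ins for the actual device API:

using System.Threading;

public class DeviceReader
{
    private readonly IComDevice _device;   // hypothetical wrapper around the COM port

    public DeviceReader(IComDevice device)
    {
        _device = device;
    }

    public byte[] ReadMeasurement()
    {
        // The device needs a fixed settling time after the command before it can be read.
        _device.SendCommand("MEASURE");
        Thread.Sleep(500);
        return _device.Read();
    }
}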

A rule I follow when designing APIs is that you shouldn’t make something Async unless the underlying API(s) you are working with are also Async, or use BeginX/EndX, continuations, i.e. something that is already asynchronous. By prematurely making something Async you are making an implementation decision for the consumer of the code. The code above doesn’t expose any such API.


Cache Consideration in Multi-Threaded Code

In parallel programs it is very important to consider cache size and hit rates on a single CPU, but it’s even more important to consider how the caches of multiple processors/cores interact. Let’s consider a single representative example, which demonstrates an important cache optimisation and emphasises the value of good tools when it comes to performance optimisation in general.

Let’s first examine the sequential method. It performs the rudimentary task of summing all the elements in a two-dimensional array of integers and returns the result:

public static int MatrixSumSequential(int[,] matrix)
{
    int sum = 0;
    int rows = matrix.GetLength(0);    // number of rows
    int cols = matrix.GetLength(1);    // number of columns
    for (int i = 0; i < rows; i++)
    {
        for (int j = 0; j < cols; j++)
        {
            sum += matrix[i, j];
        }
    }
    return sum;
}

We could have used the TPL, but let’s ignore the huge arsenal of tools it provides in our simple example. The following attempt at parallelisation may appear reasonable enough to harvest the fruits of multi-core execution, and it even implements a crude aggregation to avoid synchronisation on a shared sum variable:

public static int MatrixSumParallel(int[,] matrix)
{
    int rows = matrix.GetLength(0);
    int cols = matrix.GetLength(1);
    const int THREADS = 4;
    int chunk = rows / THREADS;                  // note: assumes rows is divisible by THREADS
    int[] localSums = new int[THREADS];          // one slot per thread to avoid locking a shared sum
    Thread[] threads = new Thread[THREADS];
    for (int i = 0; i < THREADS; i++)
    {
        int start = chunk * i;
        int end = chunk * (i + 1);
        int threadNum = i;
        threads[i] = new Thread(() =>
        {
            for (int row = start; row < end; row++)
            {
                for (int col = 0; col < cols; col++)
                {
                    localSums[threadNum] += matrix[row, col];
                }
            }
        });
        threads[i].Start();
    }
    foreach (var thread in threads)              // join outside the loop, once all threads have started
        thread.Join();
    return localSums.Sum();                      // requires System.Linq and System.Threading
}

 

Executing each of the two methods several times on an i7 machine with 6 cores produced the following results for a 2,000 x 2,000 matrix of integers:

  • 325ms average for sequential method
  • 935ms average for the parallel method – almost three times as slow as the sequential method!

The obvious question is why?
This is not an example of too-fine-grained parallelism, because the number of threads is only 4. However, if you accept the premise that the problem is somehow cache-related, it would make sense to measure the number of cache misses introduced by the two methods above.

Sampling the execution of each method with a 2,000 x 2,000 matrix, the Visual Studio profiler reported 963 exclusive samples in the parallel version and only 659 in the sequential version, with the vast majority of samples falling on the inner-loop line that reads from the matrix.

Why would a line of code writing to localSums introduce so many more cache misses than writing to the local variable sum? The answer is that the writes to the shared array invalidate cache lines on other processors/cores, causing every += operation to be a cache miss.
When one processor writes to a memory location that is held in another processor’s/core’s cache, the hardware marks that cache line as invalid, and the next access to that line results in a cache miss. Because the localSums slots sit next to each other in memory, they tend to share a cache line, so the threads keep invalidating each other’s copies – the classic false-sharing problem.
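One straightforward way to avoid the invalidations is to accumulate into a local variable inside each thread and touch the shared array only once at the end. A minimal sketch of the changed thread body (same variable names as above):

threads[i] = new Thread(() =>
{
    int local = 0;                            // thread-private accumulator, no shared cache line
    for (int row = start; row < end; row++)
    {
        for (int col = 0; col < cols; col++)
        {
            local += matrix[row, col];
        }
    }
    localSums[threadNum] = local;             // a single write to the shared array per thread
});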

The moral of the story: do not blindly introduce parallelism in the hope that it will result in a performance increase. Always test both versions; you might be surprised by the results!

Low Latency Applications in C#

Introduction

For years, C# has been dismissed as a language of choice for low-latency applications. I have been working in the financial industry for over 15 years, and I have often heard that any low-latency software, such as HFT (High-Frequency Trading), had to be implemented in a native language such as C++.

Low Latency

I think the primary requirement for low latency is not just raw performance but consistency and predictability of execution time. This is exactly why the “unpredictable” behaviour of the GC puts developers off using C# (and other .NET languages) in low-latency scenarios. However, I have recently started to notice several positions being advertised where “low latency” features together with one of the .NET languages. I have also worked as a Trade Desk Algo Developer at Credit Suisse, developing various trading automation with C# and .NET as my “weapons” of choice, without any problems.
The language itself is not the only criterion for low-latency applications. Low-latency applications very rarely (if ever) live in isolation. In the world of finance, you need to connect to various third-party services, such as market data and order execution providers, and that is where network connectivity and its performance also play a significant role.
Another aspect of managed languages is the perceived overhead of various runtime checks. In this article, I will try to examine to what degree these affect performance.

Disadvantages

Garbage Collector

The standard argument is that the CLR (Common Language Runtime) employs a non-deterministic way of cleaning up unused memory – garbage collection (GC). “Non-deterministic” in this context means that the developer has no direct control over when the memory is freed. It is up to the runtime (CLR) to decide when is the most appropriate time to perform a collection, based on available memory and the application’s behaviour. Although the developer can initiate a collection by calling the GC.Collect() method, this practice is commonly discouraged.
The CLR GC employs a tracing method of identifying and freeing memory, whereby the algorithm traverses a graph of objects allocated on the heap and identifies the ones that are no longer used. The CLR has to pause some threads during GC to be able to identify, free up and compact memory. Because the memory is compacted (i.e. defragmented), all the live references also have to be updated.
All of that takes time and, more importantly, it is not known in advance when the GC will start, causing small pauses in the execution of the application. This behaviour is deemed unacceptable for low-latency applications.

Just In Time Compilation

Just-In-Time compilation (JIT) is an approach to compilation where each method is compiled on demand. This leads to slower start-up times and occasional pauses when a method is accessed for the first time. However, this is only applicable to “cold starts” and is easily solved by running the NGen utility against the assemblies, which generates native code for them. There is also another way – manually “touching” the methods at the beginning of the program, forcing the JIT to do its job once and for all rather than on an on-demand basis – but this is more of a hack.
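As a rough illustration of the “touching” approach, the methods of a type can be forced through the JIT up front with RuntimeHelpers.PrepareMethod. A minimal sketch (PreJit is a made-up helper name):

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static void PreJit(Type type)
{
    foreach (var method in type.GetMethods(
        BindingFlags.Public | BindingFlags.NonPublic |
        BindingFlags.Instance | BindingFlags.Static |
        BindingFlags.DeclaredOnly))
    {
        // Skip what cannot be compiled ahead of a concrete call.
        if (method.IsAbstract || method.ContainsGenericParameters)
            continue;
        RuntimeHelpers.PrepareMethod(method.MethodHandle);
    }
}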

Advantages

Garbage Collector

Despite its perceived disadvantages, the GC eliminates a lot of potential bugs and enhances the productivity of a developer, who doesn’t have to be concerned with memory deallocation. This significantly reduces the possibility of memory-related bugs and means that software can be delivered much faster.
Because of the work done by the GC, memory allocations are extremely fast. One of the last stages of a collection is memory compaction. Since the memory is compacted (there are a few exceptions that leave “holes” in memory, such as pinned objects and the Large Object Heap), allocation of a new object is extremely fast: all the CLR has to do is advance the “next object pointer”. In C/C++, by contrast, the memory would have to be scanned to find an unused block for the new object.
Even though the GC causes small pauses, they are usually unnoticeable. The GC team works hard to make sure the GC causes as little interruption to program execution as possible. They have also added additional flavours of GC, giving the developer the option of specifying which behaviour they want depending on the application type. In addition, more recent changes to the .NET Framework and C#, such as return by reference (ref returns), open up the possibility of using large structs rather than reference types to save on the number of allocations. Previously such a practice would most probably have backfired, as copying large structs is relatively expensive.
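A minimal sketch of the ref-return idea (requires C# 7+; Quote and QuoteBook are made-up types): returning a reference into an array of structs avoids both copying the struct and allocating a new object.

public struct Quote
{
    public long Timestamp;
    public double Bid, Ask;
}

public class QuoteBook
{
    private readonly Quote[] _quotes = new Quote[1024];

    // Returns a reference to the array element rather than a copy of the struct.
    public ref Quote GetQuote(int index) => ref _quotes[index];
}

// Usage: the write below goes straight into the array slot – no copy, no allocation.
// ref Quote q = ref book.GetQuote(5);
// q.Bid = 100.25;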

GC Flavours

Source: http://www.lybecker.com/blog/2007/04/03/garbage-collection-flavors/

Concurrent Workstation Garbage Collection

This mode is optimised for interactivity by performing frequent, short garbage collections. Each collection is split up into smaller pieces and is interleaved with the managed application’s threads. This improves responsiveness at the cost of CPU cycles. It is ideal for interactive desktop applications, where a freezing application is an annoyance to the user and idle CPU time is abundant while waiting for user input. Concurrent workstation mode improves usability with perceived performance.
Note that the concurrent (interactive) behaviour only applies to Gen2 (full heap) collections, because Gen0 and Gen1 collections are by nature very fast.

Non-concurrent Workstation Garbage Collection

Non-concurrent workstation mode works by suspending managed application threads when a GC is initiated. It provides better throughput but might appear as application hiccups where everything freezes for the users.

Server Garbage Collection

In server mode, a managed heap and a dedicated garbage collector thread are created for each CPU. This means that each CPU allocates memory in its own heap, which results in lock-free allocation. When a collection is initiated, all the managed application threads are suspended and all the GC threads collect in parallel.
Another thing to note is that the size of the managed heap segments is larger in server mode than in workstation mode. A segment is the unit in which memory is allocated on the managed heap.

It is possible to choose the type of GC for a managed application in its configuration file. Under the <runtime> element, add one of the three settings below, and depending on the number of CPUs the garbage collector will run in the configured mode (the garbage collection type settings are sketched after this paragraph).
When running a standalone managed application, the GC mode is concurrent workstation by default. Managed application hosts like ASP.NET and SQL CLR run with server GC by default.
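For reference, this is roughly what those settings look like in an application configuration file (the runtime reads them at start-up):

<configuration>
  <runtime>
    <!-- pick one of the three: -->
    <gcConcurrent enabled="true"/>   <!-- concurrent workstation (default for standalone apps) -->
    <!-- <gcConcurrent enabled="false"/>  non-concurrent workstation -->
    <!-- <gcServer enabled="true"/>       server GC -->
  </runtime>
</configuration>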

JIT

.NET assemblies contain IL code, which is not yet executable and requires one last step – compilation to native instructions by the JIT. Since this is done on the machine where the code is going to run, and not on a build server, the JIT compiler has all the information about the hardware – primarily the CPU – and can tailor the generated native instructions to the particular CPU architecture and its features, such as instruction set extensions, additional registers, etc. Theoretically, this ought to yield faster code. I haven’t seen any benchmarks, but my guess would be that the difference is very marginal.

Less Control over Optimisations

Whereas in C++ you can embed assembler code, you cannot do so in C#. Personally, I don’t see a reason why assembler should be used in 2015, unless of course you’re writing something really low-level.
Other features, like inlining, are present only in the form of an attribute and a flag – [MethodImpl(MethodImplOptions.AggressiveInlining)] – which merely acts as a hint to the compiler. The aggressive inlining option tells it not to include the size of the method in the list of criteria when deciding whether to inline it. Even so, there is no guarantee. The CLR already does an excellent job of inlining methods and property accessors automatically where necessary. AggressiveInlining should be used only in rare circumstances; forcing large methods to be inlined can lead to less efficient use of the CPU cache.
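For illustration, this is how the hint is applied (PriceMath and Mid are made-up names):

using System.Runtime.CompilerServices;

public static class PriceMath
{
    // A hint only: asks the JIT to ignore method size when deciding whether to inline.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static double Mid(double bid, double ask) => (bid + ask) / 2.0;
}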

Automatic Checks

The beauty of managed code is that, together with the memory management, many checks have been introduced to eliminate the most common bugs.
One example is array boundary checks. The pro-native-language camp will say that those checks cost significant performance. This is not exactly the case: while the checks are made – and this is an essential feature of the CLR – they are not necessarily always performed at runtime. The CLR will use static analysis to eliminate checks it can prove redundant. Even when a check has to be performed on an array access, the JIT team used a nice trick to reduce the number of instructions. Here is an example (x86):

cmp     EDX, dword ptr [ECX+4]         
jae     SHORT G_M60672_IG03            // unsigned comparison

EDX contains the array index, and [ECX+4] the length of the array. The “jae” instruction performs the comparison index < length. The trick is that the code doesn’t appear to check for a negative index. The JIT guys exploit the properties of unsigned arithmetic and use the “jae” instruction, which performs an unsigned comparison: if EDX is negative, reinterpreting it as unsigned yields a value of at least 2^31 – larger than any possible array length – causing the check to fail.
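On the static-analysis side, the canonical pattern that lets the JIT drop the check altogether is iterating directly against the array’s Length. A minimal sketch:

// With values.Length as the loop bound, the JIT can usually prove the index
// is always in range and omit the per-element bounds check entirely.
static int Sum(int[] values)
{
    int sum = 0;
    for (int i = 0; i < values.Length; i++)
        sum += values[i];
    return sum;
}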

Does it matter?

There is quite a long list of features in native languages that are not available in C#. The question is whether they make a significant difference.
I would argue that productivity and the speed at which software can be developed play a more important role, especially these days, when it is cheaper to throw more powerful hardware at a problem than to spend more time writing, optimising and testing code.
Low latency is not only dependent on the language the application and framework are written in, but also on other factors, such as the performance of the network. For instance, you might want to disable Nagle’s algorithm, which optimises traffic over TCP at the expense of latency. This can easily be done in C# by setting the Socket.NoDelay option to true.
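A minimal sketch (the host name and port are placeholders):

using System.Net.Sockets;

// Disable Nagle's algorithm on an outgoing TCP connection.
var client = new TcpClient { NoDelay = true };   // sets Socket.NoDelay on the underlying socket
client.Connect("feed.example.com", 9001);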
Equipped with decent hardware and sufficient memory, GC pauses would be imperceptible, and there are more components in the chain that can contribute to latency than the code itself.

.NET – What is CLR ?

Ammar Hasayen - Blog

It is so interesting to break down how the .NET Framework works and uncover its internal components and functionality. I love breaking things down so I can have a better understanding of what I am dealing with.

In a previous blog post, we talked about what an Assembly is. In this blog post, I will be sharing my thoughts about the .NET run time, or CLR.

The .NET Common Language Runtime (CLR) is a run-time environment and execution engine, provided by the .NET Framework, that offers robust application support with a very small memory footprint and performs its operations very fast (15,000 managed method calls per second at 27.6 MHz).

You can think of the CLR as the interface between .NET applications and the operating system. This is why .NET applications are called “managed code” – because they are managed by the CLR.


CLR Provides the following services for programming…


Run Java 8 Code on .NET with IKVM

IKVM is a JVM built on top of the CLR that is working towards full compatibility. It runs on both .NET and Mono and, as of this release candidate, supports Java through version 8. For class libraries, it uses OpenJDK 8.

IKVM offers two modes. In dynamic mode, it runs Java applications directly just like any other virtual machine. In static mode, Java byte code is recompiled into .NET libraries and executables.

When working with Java code that is intended to run on IKVM, you can import .NET classes by prefixing the namespace with “cli.”. To satisfy the Java compiler, this requires generating the appropriate Java stubs using the ikvmstub utility.

Automatic Non-Deterministic Finalisation

The automatic mechanism cannot be deterministic, because it must rely on the GC to discover whether the object is referenced or not. At times this behaviour is a show-stopper, because temporary “resource leaks” or holding a shared resource locked for slightly longer than necessary might be unacceptable in an application. At other times, it’s perfectly acceptable. I will focus on the scenarios where it is.

Any type can override the protected Finalize method defined by System.Object to indicate that it requires automatic finalisation. The C# syntax for requesting automatic finalisation on a class A, however, is to implement the method ~A(). This method is called a finaliser and is invoked when the object is destroyed.
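A minimal sketch of the syntax:

using System;

public class A
{
    ~A()
    {
        // Compiled into an override of Object.Finalize; the GC invokes it at some
        // non-deterministic point after the object becomes unreachable.
        Console.WriteLine("A finalised");
    }
}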

Incidentally, any type can have a finaliser, even value types. However, a finaliser on a value-type instance will never be invoked.


WCF Protection Level

WCF has a huge security component to it and encrypts and signs messages by default. This can be overkill, especially if you are debugging, or are transporting data over an already secured channel and are trying to squeeze out every bit of performance.


To avoid this overhead, you can implement integrity alone when confidentiality is not a requirement. For such cases, WCF provides the facility to set the protection level on the message. Note that protection levels can only be set for messages; WCF does not allow disabling protection for transport security. The following application configuration snippet illustrates how to achieve this; the messages are required only to be signed before they are sent:

<bindings>
  <wsHttpBinding>
    <binding name="test">
      <security mode="Message">
        <message defaultProtectionLevel="Sign"/>
      </security>
    </binding>
  </wsHttpBinding>
</bindings>

You can also specify the ProtectionLevel property through code, on the ServiceContract and OperationContract attributes (see the sketch after this paragraph). Message exchange patterns (MEPs) determine how messages are sent from the sender to the receiver. WCF implements security support for both one-way and request-reply MEPs. However, duplex MEPs are available only in WSDualHttpBinding, NetTcpBinding, and NetNamedPipeBinding.
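A minimal sketch of the code-based approach (IQuoteService and GetQuote are made-up names): the contract asks for sign-only protection, while one operation opts back into encryption.

using System.Net.Security;
using System.ServiceModel;

[ServiceContract(ProtectionLevel = ProtectionLevel.Sign)]
public interface IQuoteService
{
    [OperationContract(ProtectionLevel = ProtectionLevel.EncryptAndSign)]
    double GetQuote(string symbol);
}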

Breaking the CLR Type Safety

Programming fun

As you might have already heard, .NET is type safe. Specifically, the CLR provides type safety at runtime, and some .NET languages (for example, C#) work with the CLR to provide type safety at compile time.

So what does the term type safe mean?

Let’s find what Wikipedia says:

In computer science, type safety is the extent to which a programming language discourages or prevents type errors. A type error is erroneous or undesirable program behavior caused by a discrepancy between differing data types for the program’s constants, variables, and methods (functions), e.g., treating an integer (int) as a floating-point number (float). Blah blah blah..

OK, now we know that type safety has something to do with preventing the programmer from treating a type as something other than what it is.

Well, C# and the CLR are pretty good at this; they work together to push you into the Pit…

