
.NET Foundations - .NET execution model

Today's post continues the quest of answering the "How does .NET work?" question exactly where the previous post stopped: at the .NET assembly structure. In case you haven't read that post already, I suggest you do so before proceeding with this one.

General level explanation

When an assembly containing .NET code is executed, it first (1) runs a small piece of native code inside the module whose only purpose is (2) to call into MSCorEE.dll. MSCorEE then (3) loads the appropriate version of MSCorWks.dll, a COM server contained in a DLL that implements the core .NET functionality. Once the CLR is loaded and running, MSCorEE (4) executes the method matching the entry point token defined in the CLR header.

[Figure: overview diagram of the .NET execution model]

Now that we are done with the level-100 answer, we can take a more detailed look at how the .NET execution model really works.

Detailed explanation

When a managed executable starts, Windows examines the assembly's PE header to determine whether to create a 32-bit or 64-bit process, and then the process's primary thread calls a method inside the MSCorEE.dll ("shim") file.

To understand how that shim DLL is found and loaded, and which method inside it gets called, we have to take a look at a couple of things from the dumpbin results shown in the previous post.

We can see there that the PE optional header section defines the value 0040274E as the module entry point.

 

If we now take a look at the RAW DATA section (in the same result.txt output from the last post), we see that at the 0040274E entry point address we have the following bytes: FF 25 00 20 40 00.

[Figure: RAW DATA bytes at the 0040274E entry point]

These bytes translate into a "jump indirect through 00402000" instruction: FF 25 is the x86 opcode for an indirect jmp, and the following four bytes, 00 20 40 00, are the little-endian address 00402000.

To understand what that 00402000 value represents, we have to take a look at the import section of the PE file, shown in the previous post:

 

As we can see there, the import section has an entry for mscoree.dll, which therefore has to exist so it can be loaded into the already created (unmanaged) process. The import section is the standard way a PE file specifies which DLLs it depends on (managed or unmanaged).

An important thing to notice here is that the import address 00402000 is the same one we saw stored in the raw data. Another thing we can see in the mscoree import section is that the imported mscoree.dll function is called _CorExeMain, so the earlier interpretation of the raw data bytes, "jump indirect through 00402000", can therefore be read as "jmp _CorExeMain" (as it is represented in the diagram).
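If you don't have dumpbin at hand, the same header values can also be read from managed code. Here is a minimal sketch using the PEReader type from the System.Reflection.Metadata package (my assumption for illustration; the original walkthrough relies on dumpbin output only):

    using System;
    using System.IO;
    using System.Reflection.PortableExecutable;

    class PeEntryPointDump
    {
        static void Main(string[] args)
        {
            // args[0] is the path to the managed executable we want to inspect
            using (var stream = File.OpenRead(args[0]))
            using (var pe = new PEReader(stream))
            {
                var header = pe.PEHeaders.PEHeader;

                // 0x0040274E from the dumpbin output = image base (0x00400000) + entry point RVA (0x274E)
                Console.WriteLine("Image base:        0x{0:X8}", header.ImageBase);
                Console.WriteLine("Entry point (RVA): 0x{0:X8}", header.AddressOfEntryPoint);
                Console.WriteLine("Entry point (VA):  0x{0:X8}",
                    header.ImageBase + (ulong)header.AddressOfEntryPoint);
            }
        }
    }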

MSCorEE.dll (Microsoft Component Object Runtime Execution Engine)

This DLL is essential for the functioning of all .NET applications, and it is located in the %SystemRoot%\system32 directory. So, in case we were asked:

"How to perform simple file based test if NET framework is installed?"

we could just check whether the mscoree.dll file exists in the mentioned location, and we would know whether the .NET Framework is installed on a given machine.
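In code, that check is a one-liner. A minimal sketch, building the path from %SystemRoot%\system32 as mentioned above:

    using System;
    using System.IO;

    class MscoreeCheck
    {
        static void Main()
        {
            // mscoree.dll lives in %SystemRoot%\system32 on any machine with the .NET Framework installed
            string path = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.System), "mscoree.dll");

            Console.WriteLine(File.Exists(path)
                ? ".NET Framework appears to be installed."
                : ".NET Framework does not appear to be installed.");
        }
    }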

Contrary to popular belief, the CLR itself is not implemented in MSCorEE.dll. It is implemented in a COM server contained in MSCorWks.dll, and there is a separate version of that file for each installed version of the framework.

For example, if we have .NET 1.1, .NET 2.0 and .NET 3.5 installed, the core CLR functionality is located in the following folders (a quick way to list them programmatically is sketched right after the list):

  • C:\Windows\Microsoft.NET\Framework\v1.1.4322   (.NET 1.1 / CLR 1.1)
  • C:\Windows\Microsoft.NET\Framework\v2.0.50727  (.NET 2.0 / .NET 3.0 / .NET 3.5 / CLR 2.0)
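Here is the promised sketch that lists those version folders on a machine (the folder layout is taken from the list above; which folders exist obviously depends on what is installed):

    using System;
    using System.IO;

    class FrameworkFolders
    {
        static void Main()
        {
            // %windir%\Microsoft.NET\Framework holds one vX.Y.ZZZZZ folder per installed CLR
            string root = Path.Combine(
                Environment.GetEnvironmentVariable("windir"), @"Microsoft.NET\Framework");

            foreach (string dir in Directory.GetDirectories(root, "v*"))
                Console.WriteLine(Path.GetFileName(dir));   // e.g. v1.1.4322, v2.0.50727
        }
    }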

In case you are confused by the wide scope of the second list item, you might ask yourself the following question:

"Which version of CLR are using NET 3.0 and NET 3.5 frameworks ?"

The answer is:

  • .NET 3.0 = CLR 2.0 + WinFX (where WinFX = WCF + WPF + WF)
  • .NET 3.5 = CLR 2.0 + WinFX + LINQ
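A quick way to confirm this from code: Environment.Version reports the version of the CLR (not of the framework), so even an application compiled against .NET 3.0 or 3.5 prints a 2.0.50727.* value. A minimal sketch:

    using System;

    class ClrVersionCheck
    {
        static void Main()
        {
            // Environment.Version is the CLR version, so a .NET 3.0/3.5 application
            // still prints something like 2.0.50727.x
            Console.WriteLine("CLR version: {0}", Environment.Version);
        }
    }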

Once mscoree determines the appropriate version of the CLR, it initializes it.

With the CLR loaded and running, mscoree's _CorExeMain method then loads the managed module's data, retrieves from the module's CLR header the MethodDef token of the Main method, and calls that method.

From that moment on, the managed application is running and the CLR takes care of executing the application's modules.
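That entry point token from the CLR header can be inspected programmatically as well. Again a sketch with the System.Reflection.Metadata package (an assumption of mine, not something the original walkthrough used), resolving the token to the Main method:

    using System;
    using System.IO;
    using System.Reflection.Metadata;
    using System.Reflection.Metadata.Ecma335;
    using System.Reflection.PortableExecutable;

    class EntryPointTokenDump
    {
        static void Main(string[] args)
        {
            using (var pe = new PEReader(File.OpenRead(args[0])))
            {
                // The CLR header stores the MethodDef token of the module entry point (our Main)
                int token = pe.PEHeaders.CorHeader.EntryPointTokenOrRelativeVirtualAddress;
                Console.WriteLine("Entry point token: 0x{0:X8}", token);

                MetadataReader md = pe.GetMetadataReader();
                var handle = MetadataTokens.EntityHandle(token);
                if (handle.Kind == HandleKind.MethodDefinition)
                {
                    MethodDefinition method = md.GetMethodDefinition((MethodDefinitionHandle)handle);
                    Console.WriteLine("Entry point method: {0}", md.GetString(method.Name));   // "Main"
                }
            }
        }
    }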

JIT compiler

Every IL instruction generated from C# code needs to be compiled to native CPU instructions before it can be executed. Because .NET is IL-oriented, that is not done at compile time (although something like that can be done with the NGen tool, you would still need to deploy the IL code). Instead, .NET uses a just-in-time (JIT) compilation model, which compiles, on the fly, the IL code that is about to execute.

Two major advantages of JIT compiling are:

  • compilation produces native code specific to the CPU of the client machine executing the application
  • the CLR can profile code execution paths and recompile the IL to native code on the fly to increase performance

In my experience, the most common question regarding the JIT compiler is:

"Explain me how this JIT compiler works?"

To answer that question, I'll use the same C# example we used in the previous .NET Foundations post:

namespace CSharp_ILCode
{
    class Program
    {
        static void Main(string[] args)
        {
            System.Console.WriteLine("Hello world!");
            Hello2();
        }

        static void Hello2()
        {
            System.Console.WriteLine("Hello world 2x!");
        }
    }
}

 

Just before the _CorExeMain method of mscoree.dll calls the Main method, the CLR makes a list of all the types used in the Main method. (In our little code example, there is only one such type - Console.)

For each of the detected types, the CLR creates an internal data structure, something similar to a data table, containing all the methods of the referenced type (that information is retrieved from the type metadata).

In the case of our Console type, that internal data table could look something like this:

Method name   Address
-----------   -------------
Beep          ^JITCFunction
...           ...
Write         ^JITCFunction
...           ...
WriteLine     ^JITCFunction

As we can see, the left column lists all the methods of the Console type. The right column initially contains a pointer to an undocumented, internal CLR function which I will call JITCFunction here.

After that internal data table is created, mscoree starts executing Main, and the first line calls the WriteLine method.

The CLR tries to get, from the internal data table, the address where it can find the native CPU instructions, but because our internal data table contains only a pointer to JITCFunction, the CLR ends up calling JITCFunction.

JITCFunction knows which method caused the call, and it then performs the following steps:

  1. examine the method metadata and retrieve its IL code,
  2. verify that the IL code is safe to execute,
  3. compile the IL code into native CPU instructions,
  4. store the resulting native CPU instructions in a newly allocated, dynamic memory block,
  5. update the internal data table by replacing, in the WriteLine row's address column, the JITCFunction pointer with a pointer to the memory block from step 4.

At the end of those steps, the internal data table looks like this:

Method name   Address
-----------   -----------------------------
Beep          ^JITCFunction
...           ...
Write         ^JITCFunction
...           ...
WriteLine     ^NativeInstructionsMemoryBlock


Once the update of the internal data table is complete, JITCFunction jumps to the address of the native CPU instruction memory block, which in our case results in the "Hello world!" text being shown on the console screen.
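The whole mechanism can be mimicked, purely as an illustration, with a delegate table in plain C#. The real CLR structures are internal and undocumented, so treat this only as a model of the stub-then-patch behavior described above:

    using System;
    using System.Collections.Generic;

    // Toy model of the per-type method table: every slot starts out pointing at a
    // "JITCFunction"-style stub which "compiles" the method on first call, patches
    // the slot, and then jumps to the compiled code. Later calls hit the compiled
    // code directly.
    class JitTableModel
    {
        static readonly Dictionary<string, Action> methodTable = new Dictionary<string, Action>();

        static void Main()
        {
            methodTable["WriteLine"] = () => JitStub("WriteLine");

            methodTable["WriteLine"]();   // 1st call: stub compiles, patches the table, runs the code
            methodTable["WriteLine"]();   // 2nd call: goes straight to the already "compiled" code
        }

        static void JitStub(string methodName)
        {
            Console.WriteLine("JITCFunction: compiling IL of {0}...", methodName);

            // Stands in for the dynamically allocated block of native instructions.
            Action compiled = () => Console.WriteLine("Hello world!");

            methodTable[methodName] = compiled;   // patch the table entry (step 5 above)
            compiled();                           // jump to the freshly compiled code
        }
    }

The dictionary of delegates stands in for the internal data table, and overwriting the slot is the moral equivalent of step 5 in the list above.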

After seeing all these steps happening on every method's JIT compilation, the next question always pops up:

"Are managed applications much slower then native one due to performance hits caused by JIT compiler?"

The short answer is: no, they are not, because the performance hit caused by the JIT compiler is minimal, and the JIT compiler makes up for it with its advantages (CPU-specific compilation).

Why the JIT-induced hit is minimal is best explained by continuing our code example walkthrough.

After the first line is written to the console, my little example calls the Hello2 method, which contains only one line - a call to the Console.WriteLine method.

The difference is that this time, when the CLR tries to find the address where the WriteLine method's native CPU instructions are located, it succeeds; instead of going through the whole set of JIT compilation steps, it simply executes the native CPU instructions already compiled during the first WriteLine execution.

We can see that the JIT compiler therefore "caches" native CPU instructions in dynamic memory, which means the compiled code is available to all the code executing in the same AppDomain for as long as the application is not terminated.

Most applications spend most of their time making repeated calls to the same methods, so overall application performance can generally be treated as very close to (if not the same as, or better than) that of a native application.
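If you want to see the warm-up cost yourself, here is a rough sketch (the timings depend entirely on the machine; the only point is that the first call includes the JIT work and subsequent calls do not):

    using System;
    using System.Diagnostics;

    class JitWarmupDemo
    {
        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            Work();                                        // first call pays the JIT compilation cost
            Console.WriteLine("1st call: {0} ticks", sw.ElapsedTicks);

            sw.Reset();
            sw.Start();
            Work();                                        // already compiled, no JIT work involved
            Console.WriteLine("2nd call: {0} ticks", sw.ElapsedTicks);
        }

        static void Work()
        {
            double sum = 0;
            for (int i = 1; i < 10000; i++)
                sum += Math.Sqrt(i);
        }
    }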

Conclusion

In these two posts I tried to give a quick overview, with some crucial details, that could help answer the "How does .NET work?" question. I hope you now see how the structure of a .NET assembly and the execution model are two parts of the same story, cooperating with and supporting each other. I also tried to give a couple of side answers to some smaller but still interesting questions.

The next post in my .NET Foundations series will cover stack/heap-related subjects: value types vs. reference types, instance vs. type members, why boxing is evil, etc.

So, stay tuned :)
