Thursday, April 13, 2017

Peculiarities of a New Windows TCP/IP Stack



Starting with Windows Vista, Microsoft switched its operating systems to a new network stack, the Next Generation TCP/IP Stack. The stack is stuffed with various perks: the Windows Filtering Platform, a scalable TCP window, and other delicacies. But it is not those we will be talking about here; rather, a specific behavioral pattern of the new network stack.

Any self-respecting network scanner should be able to detect the operating system running on the host being scanned. The more parameters it uses for this purpose, the more accurate the result. For example, Nmap employs a wide range of metrics: TCP metrics (timestamp behavior, the ordering of TCP options), IP metrics (the algorithm used to calculate packet identification numbers, the handling of IP packet flags), and others.

At Positive Technologies, we also collect such metrics and detect OS versions, so I'd like to tell you about a metric I found recently. This metric allows detecting Windows systems that use the new stack. The detection method is based on analyzing ICMP Timestamp responses. ICMP Timestamp is a distant ancestor of time synchronization protocols that allows requesting the time of a remote system. The structure of an ICMP Timestamp request and response is shown in Figure 1.


Figure 1. The structure of an ICMP Timestamp request and response.
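Before digging into the quirk, here is a minimal sketch of such a probe in Python using Scapy. The target address is a placeholder, and privileges for raw sockets are assumed:

from scapy.all import IP, ICMP, sr1

target = "192.0.2.1"  # placeholder address, replace with the host to probe

# Send an ICMP Timestamp request (type 13) and wait for the Timestamp reply (type 14)
reply = sr1(IP(dst=target) / ICMP(type=13), timeout=2, verbose=False)
if reply is not None and reply.haslayer(ICMP) and reply[ICMP].type == 14:
    icmp = reply[ICMP]
    # Scapy decodes these 32-bit fields in network (big-endian) byte order
    print("originate:", icmp.ts_ori)
    print("receive:  ", icmp.ts_rx)
    print("transmit: ", icmp.ts_tx)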

The fields highlighted in red form the standard ICMP header. Below them are the timestamp fields, which record the moment the request was sent, the moment it was received by the remote host, and the moment the response was sent by the remote host. Each timestamp is the number of milliseconds that have elapsed since midnight UTC. If a host cannot provide its time with millisecond accuracy, it should set the high-order bit to 1 and send whatever time data it has. We are interested in the last timestamp, the transmit timestamp. The fact is that Microsoft, as it tends to do, was rather cavalier about implementing the RFC: instead of using network byte order, the system sends the transmit timestamp in host byte order without setting the high-order bit to 1, even though it reports the time only with one-second accuracy. Moreover, starting with Vista, Windows systems for some inexplicable reason do strange things with the timestamps.
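The byte-order quirk already suggests a simple detection heuristic. The sketch below is my own illustration, not part of any official tool: take the transmit timestamp as decoded in network byte order (for example, the ts_tx field from the probe above), byte-swap it, and check which reading falls within a plausible range of milliseconds since midnight UTC.

import struct

MS_PER_DAY = 24 * 60 * 60 * 1000  # 86,400,000 milliseconds in a day

def looks_like_new_windows_stack(ts_tx_be: int) -> bool:
    """Heuristic: ts_tx_be is the transmit timestamp as an integer decoded
    in network (big-endian) byte order. A host that wrote the field in
    little-endian (host) byte order, as new Windows stacks do, produces a
    value that only makes sense after a byte swap."""
    ts_tx_le = struct.unpack("<I", struct.pack("!I", ts_tx_be))[0]
    return ts_tx_be >= MS_PER_DAY and ts_tx_le < MS_PER_DAY

The check can occasionally misfire when the byte-swapped value of a standards-compliant host also happens to fall within a day, so in practice it should be combined with other fingerprinting metrics.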

Figure 2 shows the standard behavior of the timestamps. The blue line represents the timestamps received from the server, while the red one represents the server time. The X axis is the time elapsed since the start of the experiment; the Y axis shows the timestamp values in seconds. The values are plotted with the network byte order converted to host byte order.

Figure 2. Standard timestamp behavior.

However, the actual behavior turned out to be a far cry from the standard. Take a look at Figure 3 (shown at a higher time resolution).
Figure 3. Timestamp behavior at a higher time resolution.

As you can see, every second the value of the transmitted timestamp exceeds the true one by 10,000 seconds. The gaps in the graph are simply the result of the Windows scheduler preempting our program.
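For anyone who wants to reproduce the measurement behind Figures 2 and 3, here is a rough sketch of the sampling loop, reusing target, IP, ICMP, and sr1 from the probe above. The local clock stands in for the server time, so the two machines are assumed to be roughly synchronized:

import struct
import time
from datetime import datetime, timezone

def ms_since_midnight_utc() -> int:
    # Local equivalent of the ICMP timestamp: milliseconds since midnight UTC
    now = datetime.now(timezone.utc)
    return ((now.hour * 60 + now.minute) * 60 + now.second) * 1000 + now.microsecond // 1000

# Sample the remote transmit timestamp once a second and print its deviation
# from the local clock, in seconds (positive means the remote value runs ahead)
for _ in range(60):
    reply = sr1(IP(dst=target) / ICMP(type=13), timeout=2, verbose=False)
    if reply is not None and reply.haslayer(ICMP) and reply[ICMP].type == 14:
        # Byte-swap the big-endian value Scapy decoded into the host byte order actually used
        ts_le = struct.unpack("<I", struct.pack("!I", reply[ICMP].ts_tx))[0]
        print(round((ts_le - ms_since_midnight_utc()) / 1000.0, 3))
    time.sleep(1)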

It might be that Windows transmits timestamps this way to confuse an attacker trying to find out the remote host's time. This behavior is typical only of Windows systems with the new stack, which gives us a new way of detecting such systems.

Thank you for your attention.

Author: Stanislav Kirillov, Positive Research 
