I was testing some hosts on a network I was building to determine whether they could communicate with each other over the network links, using the “ping” command. During the tests I noticed that the Time to Live (TTL) field of the ping responses differed between Operating System bases (Windows, OSX and Unix/Linux). I had noticed the TTL field before during ping tests, but I had never given a second thought to the idea that it could be used for OS detection, so I decided to write a blogpost about my research into how OS detection is actually performed.
| Operating System | TTL Value |
|---|---|
| Linux Kernel 2.4/2.6 | 64 |
| Windows | 128 |
| OSX / iOS | 255 |
My initial thought was that OS detection could be performed simply by pinging a host and determining its OS from the TTL value returned in the ping response. This is because each OS base has a different default TTL value: RFC 791, the RFC that defines IP, does not mandate a standard default for the TTL field. The table above contains the default TTL values of the OSes I tested. The TTL value is reduced by each hop the packet takes from source to destination and back, a hop being the passage from one router to the next.
Example – Ping to a Windows-based host with no hops in transit
Source > Destination > Source = TTL of 128
Example – Ping to a Windows-based host with a single hop in transit
Source > Router > Destination > Router > Source = TTL of 126
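The guessing logic this suggests can be sketched as a small Python helper (a hypothetical heuristic of my own, assuming the common defaults of 64, 128 and 255): since the TTL only ever decreases in transit, we round the observed value *up* to the nearest known default.

```python
# Common default TTL values by OS base (from the table above).
DEFAULT_TTLS = {64: "Linux/Unix", 128: "Windows", 255: "OSX/iOS"}

def guess_os_base(observed_ttl):
    """Guess the sender's OS base from an observed TTL.

    Assumes the packet started at one of the common defaults and was
    decremented once per router passed, so the smallest default that is
    >= the observed value is the likely starting point.
    """
    for default in sorted(DEFAULT_TTLS):
        if observed_ttl <= default:
            decrements = default - observed_ttl
            return DEFAULT_TTLS[default], decrements
    return "unknown", None

print(guess_os_base(126))  # -> ('Windows', 2): decremented twice, i.e. one router there and back
print(guess_os_base(64))   # -> ('Linux/Unix', 0): no routers in transit
```

Note the returned count is the number of TTL decrements, which per the example above is twice the number of routers on the path.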
The TTL value is decreased each time the packet passes through a router; because the packet passes through the router twice, once on the way to the destination and once on the return, the value is decreased by two. So my thought at this point was that it was possible to perform OS detection purely from the TTL value of a host. I quickly learnt this was not the case. In my research into the TTL value, I found that the default can be altered: on Linux-based hosts by modifying the value in the file “/proc/sys/net/ipv4/ip_default_ttl” (OSX exposes an equivalent sysctl), or in a Windows environment by modifying the “DefaultTTL” value under the “\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters” key in the “HKEY_LOCAL_MACHINE” tree of the Windows registry. Relying purely on the TTL value of a ping is therefore not a viable way to perform OS detection, because:
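On a Linux host the current default can be read straight from the proc file mentioned above. A minimal sketch (the file only exists on Linux, so the helper returns None elsewhere or when it cannot be read):

```python
from pathlib import Path

def read_default_ttl(proc_path="/proc/sys/net/ipv4/ip_default_ttl"):
    """Return the host's default IPv4 TTL on Linux, or None if unavailable."""
    try:
        return int(Path(proc_path).read_text().strip())
    except (OSError, ValueError):
        return None  # not Linux, file missing, or unreadable

print(read_default_ttl())
```

Writing a new value to the same file (as root) is what lets an administrator disguise the host's OS base, which is exactly the problem for TTL-only detection.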
- The default TTL value could be changed by an administrator to trick anyone probing the host into thinking it runs a different OS base from the one it really does. This is security through obscurity rather than an actual security mechanism.
- This method is not viable over the internet or a large network with multiple routers: the path a packet travels cannot easily be controlled, so it may pass through an unknown number of routers, altering the TTL value to the point where the base OS of the host can no longer be reliably determined.
This doesn’t mean the TTL value cannot be used to help determine the OS of a host, but it shouldn’t be blindly trusted. Because my initial idea turned out to be flawed and doesn’t give a proper, trustworthy method of OS detection, I decided to learn how OS detection is actually performed. Tools such as NMAP have OS detection functions that can identify not only the host’s base OS but also the flavour of the OS, such as Windows XP, Windows Vista, Windows 7, Ubuntu or FreeBSD.
NMAP uses TCP/IP stack fingerprinting for its OS detection function, which examines many fields of the TCP/IP response, including the TTL value. Parts of the TCP protocol definition are left up to the implementation and are not fixed by the standard. This means that different operating systems, and different versions of the same operating system, set different default values for these fields. By examining these values an individual can differentiate between various operating systems and implementations of the TCP/IP stack. The TCP/IP fields that may vary include the following:
- Initial packet size (16 bits)
- Initial TTL (8 bits)
- Window size (16 bits)
- Max segment size (16 bits)
- Window scaling value (8 bits)
- “don’t fragment” flag (1 bit)
- “sackOK” flag (1 bit)
- “nop” flag (1 bit)
These values may be combined to form a 67-bit signature, or fingerprint, for the target machine. Often, inspecting just the initial TTL value and the window size is enough to identify an operating system. NMAP maintains a database, nmap-os-db, and compares the fingerprint of the host being scanned against the fingerprints stored in the database for matches.
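The matching idea can be illustrated with a toy version of that database lookup (a simplified sketch with made-up entries; the real nmap-os-db holds far richer fingerprints and uses approximate matching rather than exact lookup):

```python
# Toy fingerprint database: (initial TTL, TCP window size) -> OS guess.
# Entries are illustrative only, not taken from the real nmap-os-db.
FINGERPRINT_DB = {
    (64, 5840): "Linux 2.4/2.6",
    (64, 65535): "FreeBSD",
    (128, 65535): "Windows XP",
    (128, 8192): "Windows 7",
    (255, 65535): "OSX / iOS",
}

def match_fingerprint(ttl, window_size):
    """Look up an exact (TTL, window size) pair in the toy database."""
    return FINGERPRINT_DB.get((ttl, window_size), "unknown")

print(match_fingerprint(128, 8192))  # -> Windows 7
print(match_fingerprint(64, 1234))   # -> unknown
```

Even this two-field lookup shows why combining fields beats TTL alone: a TTL of 64 on its own is ambiguous between Linux and FreeBSD, but adding the window size separates them.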
OS fingerprinting techniques fall into one of two categories: passive OS fingerprinting, which involves monitoring the network traffic coming from the host being fingerprinted and analysing the packets, and active OS fingerprinting, which involves communicating directly with the host being fingerprinted.
This is the end of my blogpost. I am not going into how TCP/IP stack OS fingerprinting is performed, as my research turned up plenty of explanations of the techniques used. My initial plan in writing this post was to determine whether the TTL alone could be used for OS fingerprinting; it can, but I would recommend the other techniques over TTL fingerprinting alone.