A high-level language such as C provides several data types like char, int, and long. However, compilers are not free to decide the sizes of these types arbitrarily. The reason is that compiled code has to call the operating system's system APIs, passing arguments of the sizes those APIs expect. The size of each data type chosen by the compiler must therefore match the sizes expected by the OS and its system APIs; otherwise, system calls would fail with errors or return unexpected results.
Let's consider a system call that expects an 'int' (4 bytes) as its first argument and a 'char' (1 byte) as its second. To call this API, the compiler must provide an 'int' type of 4 bytes and a 'char' type of 1 byte. Now suppose the compiler instead defined 'int' as 1 byte and 'char' as 4 bytes. A straightforward call to the OS API would fail, because the OS still expects a 4-byte value first and a 1-byte value second. As a workaround, we could pass a 'char' (since its size is 4 bytes in this hypothetical compiler) as the first argument and an 'int' (1 byte) as the second. But swapping types like this would be a serious programming hazard and a constant source of confusion. That is why the compiler's data types must match the sizes chosen by the operating system.
There would be many more issues if the size of any compiler-provided data type differed from the sizes the OS assumes. To avoid such issues, data models, which specify the size of each data type, were introduced and standardized. The compiler is expected to follow the data model chosen by the operating system.
Every application and every operating system has an abstract data model. Many applications do not explicitly expose this data model, but the model guides the way in which the application's code is written.
The table below details the data types of several data models for comparison purposes. All sizes are in bits.

| Data model | char | short | int | long | long long | pointer / size_t | Sample operating systems |
|---|---|---|---|---|---|---|---|
| LP32 | 8 | 16 | 16 | 32 | 64 | 32 | Win16 API, Apple Macintosh |
| ILP32 | 8 | 16 | 32 | 32 | 64 | 32 | Win32 API, 32-bit UNIX/Linux |
| LLP64 / IL32P64 | 8 | 16 | 32 | 32 | 64 | 64 | Microsoft Windows (x64) |
| LP64 / I32LP64 | 8 | 16 | 32 | 64 | 64 | 64 | Most 64-bit UNIX-like systems |
| ILP64 | 8 | 16 | 64 | 64 | 64 | 64 | HAL's SPARC64 port of Solaris |
| SILP64 | 8 | 64 | 64 | 64 | 64 | 64 | Classic UNICOS (Cray) |
Many 64-bit compilers today use the LP64 model (including the native compilers for Solaris, AIX, HP-UX, Linux, Mac OS X, FreeBSD, and IBM z/OS). Microsoft's Visual C++ compiler uses the LLP64 model.
Please note that the size of 'long long' is 64 bits on both 32-bit and 64-bit machines/OSes. The C99 version of the C programming language and the C++11 version of C++ support a 'long long' type that doubles the minimum width of the standard long to 64 bits. Compilers that require code to conform to the earlier C++ standard, C++03, do not support this type, because it did not exist in C++03. Microsoft Visual C++ supports it, but some older compilers may not.
References:
http://en.wikipedia.org/wiki/64-bit
http://www.unix.org/whitepapers/64bit.html