Hacker News

> need to change to a more-than-64-bit architecture

Perhaps we don't need a single flat address space with byte-addressable granularity at those sizes?

I wonder how an 8-bit byte, 48-bit word system would have fared. 2^32 is easy to exhaust in routine tasks; 2^48 not so much.



Until Intel's Ice Lake server processors introduced in 2019, x86-64 was essentially a 48-bit address architecture: addresses are stored in 64-bit registers, but an address is valid ("canonical") only if its top 16 bits are a sign extension of bit 47, the most significant bit of the 48-bit address. With 5-level paging they now support 57-bit addressing.


True. However, what I had in mind was something along the lines of 48-bit integer and floating-point arithmetic as the "native" size, with 96-bit as the much more limited extended form, playing the role that 128-bit currently fills on x86. For the address space, regular 48-bit pointers would cover the needs of typical applications.

Extended 96-bit pointers could address the (rather exotic) needs of things such as distributed HPC workloads, flat byte addressable petabyte and larger filesystems, etc. Explicitly segmented memory would also (I assume) be nice for things like peripheral DMA, NUMA nodes, and HPC clusters. Interpreters would certainly welcome space for additional pointer tag bits in a fast, natively supported format.

Given the existence of things like RIP-relative addressing and the insane complexity of current MMUs, such a scheme seems on its face quite reasonable to me. I don't understand (presumably due to my own lack of knowledge) why 64-bit was selected. As you point out, addresses themselves were 48-bit in practice until quite recently.


> Perhaps we don't need a single flat address space with byte-addressable granularity at those sizes?

History is filled with paging schemes in computers (e.g. https://en.wikipedia.org/wiki/Physical_Address_Extension). People usually adopt such schemes first because they extend an existing software paradigm, letting programs access more memory without all software being rewritten; but once the CPU can simply address it all as a single linear space, that simplifies the architecture and is preferred.


Fair enough. I suppose once any scheme receives full native support it becomes indistinguishable from a flat address space anyway. What's a second partition if not an additional bit?

My pondering is less "why such a large address space" and more "why such a large native word size"? Extra bits don't come without costs.

As long as I'm asking ridiculous questions, why not 12-bit bytes? I feel like a 12/48 system would be significantly more practical for the vast majority of everyday tasks. Is it just due to inertia at this point or have I missed some fundamental observation?





