However, since not all platforms will allow the entire physical address
space to be simultaneously mapped to part of the virtual address space,
we may still require some dynamic mapping.
Namely, the global allocator API in Rust only returns a null pointer on
failure, rather than wrapping the result in a Result as AllocRef does.
Since Box::from_raw(null) is immediate UB, this can in theory lead to
very strange behavior.
Note that this is very preliminary: so far I have only gotten my
already-freezing kernel branch to stop triple faulting, but I would
probably apply this patch to upstream as well.
What is changed here is that rather than relying on recursive mapping to
access page table frames, the kernel now uses linear translation
(virt = phys + KERNEL_OFFSET). The only problem is that the paging code
now assumes the entire physical address space remains mapped, which is
not architecturally guaranteed on x86_64, even though systems with more
RAM than a PML4 can map are very rare. We'd probably want to lazily (but
linearly) map the physical address space using huge pages.
Additionally, because it turned out to be infeasible to rely on
link-time constants in global_asm! code, I have also converted the
interrupt handlers to naked fns. This removes the proc-macro-reliant
"paste" dependency, but inserts a tiny ud2 at the end of every ISR.
Previously context::switch used compare_and_swap for acquiring the
global context switch lock, but since that method is deprecated in more
recent Rust versions, it has been replaced with compare_exchange_weak
(which can be further optimized on some architectures).
It also replaces panic!() with abort() in switch_finish_hook, because
unwinding from assembly is not that fun.
This also removes the need to do another semi-expensive remap when
cloning processes, since the KPCRs (for kernel TLS) are no longer stored
in the user PML4.