
= Linux Kernel Module Cheat
:cirosantilli-media-base: https://raw.githubusercontent.com/cirosantilli/media/master/
:description: The perfect emulation setup to study and develop the Linux kernel v5.9.2, kernel modules, QEMU, gem5 and x86_64, ARMv7 and ARMv8 userland and baremetal assembly, ANSI C, C++ and POSIX. GDB step debug and KGDB just work. Powered by Buildroot and crosstool-NG. Highly automated. Thoroughly documented. Automated tests. "Tested" in an Ubuntu 20.04 host.
:idprefix:
:idseparator: -
:nofooter:
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc-title:
:toc: macro
:toclevels: 6

https://zenodo.org/badge/latestdoi/64534859[image:https://zenodo.org/badge/64534859.svg[]]

{description}

https://twitter.com/dakami/status/1344853681749934080[Dan Kaminsky-approved]™ https://en.wikipedia.org/wiki/Dan_Kaminsky[RIP].

TL;DR: xref:qemu-buildroot-setup-getting-started[xrefstyle=full] tested on Ubuntu 24.04:

....
git clone https://github.com/cirosantilli/linux-kernel-module-cheat
cd linux-kernel-module-cheat
sudo apt install docker.io
python3 -m venv .venv
. .venv/bin/activate
./setup
./run-docker create
./run-docker sh
....

This leaves you inside a Docker shell. Then inside Docker:

....
./build --download-dependencies qemu-buildroot
./run
....

and you are now in a Linux userland shell running on QEMU with everything built fully from source.

The source code for this page is located at: https://github.com/cirosantilli/linux-kernel-module-cheat[]. Due to https://github.com/isaacs/github/issues/1610[a GitHub limitation], this README is too long and not fully rendered on github.com, so either use:

* https://cirosantilli.com/linux-kernel-module-cheat
* https://cirosantilli.com/linux-kernel-module-cheat/index-split[]: split header version
* <>

https://github.com/cirosantilli/china-dictatorship | https://cirosantilli.com/china-dictatorship/xinjiang

image::https://raw.githubusercontent.com/cirosantilli/china-dictatorship-media/master/Xinjiang_prisoners_sitting_identified.jpeg[width=800]

toc::[]

== `--china`

The most important functionality of this repository is the `--china` option, sample usage:

....
python3 -m venv .venv
. .venv/bin/activate
./setup
./run --china > index.html
firefox index.html
....

see also: https://cirosantilli.com/china-dictatorship/mirrors

The secondary systems programming functionality is described in the sections below starting from <<getting-started>>.

image::https://raw.githubusercontent.com/cirosantilli/china-dictatorship-media/master/Tiananmen_cute_girls.jpg[width=800]

== Getting started

Each child section describes a different possible setup for this repo.

If you don't know which one to go for, start with <<qemu-buildroot-setup>>.

Design goals of this project are documented at: xref:design-goals[xrefstyle=full].

=== Should you waste your life with systems programming?

Being the hardcore person who fully understands an important complex system such as a computer does have a nice ring to it, doesn't it?

But before you dedicate your life to this nonsense, do consider the following points:

* almost all contributions to the kernel are done by large companies, and if you are not an employee in one of them, you are likely not going to be able to do much.
+
This can be inferred from the fact that the `drivers/` directory is by far the largest in the kernel.
+
The kernel is of course just an interface to hardware, and the hardware developers start developing their kernel stuff even before specs are publicly released, both to help with hardware development and to have things working when the announcement is made.
+
Furthermore, I believe that there are in-tree devices which have never been properly publicly documented. Linus is of course fine with this, since code == documentation for him, but it is not as easy for mere mortals.
+
There are some less hardware bound higher level layers in the kernel which might not require being in a hardware company, and a few people must be living off it.
+
But of course, those are heavily motivated by the underlying hardware characteristics, and it is very likely that most of the people working there were previously at a hardware company.
+
In that sense, therefore, the kernel is not as open as one might want to believe.
+
Of course, if there is some https://stackoverflow.com/questions/1697842/do-graphic-cards-have-instruction-sets-of-their-own/1697883[super useful and undocumented hardware that is just waiting there to be reverse engineered], then that's a much juicier target :-)
* it is impossible to become rich with this knowledge.
+
This is partly implied by the fact that you need to be in a big company to make useful low level things, and therefore you will only be a tiny cog in the engine.
+
The key problem is that the entry cost of hardware design is just too insanely high for startups in general.
* Is learning this the most useful thing that you think you can do for society?
+
Or are you just learning it for job security and having a nice sounding title?
+
I'm not a huge fan of the person, but I think Jobs said it right: https://www.youtube.com/watch?v=FF-tKLISfPE
+
First determine the useful goal, and then backtrack down to the most efficient thing you can do to reach it.
* there are two things that sadden me compared to physics-based engineering:
+
--
** you will never become eternally famous. All tech disappears sooner or later, while laws of nature, at least as useful approximations, stay unchanged.
** every problem that you face is caused by imperfections introduced by other humans.
+
It is much easier to accept limitations of physics, and even natural selection in biology, which are not produced by a sentient being (?).
--
+
Physics-based engineering, just like low level hardware, is of course completely closed source however, since wrestling against the laws of physics is about the most expensive thing humans can do, so there's also a downside to it.

Are you fine with those points, and ready to continue wasting your life with this crap?

Good. In that case, read on, and let's have some fun together ;-)

Related: <>.

=== QEMU Buildroot setup

==== QEMU Buildroot setup getting started

This setup has been tested on Ubuntu 20.04.

The Buildroot build is already broken on Ubuntu 21.04 onwards: https://github.com/cirosantilli/linux-kernel-module-cheat/issues/155[], so just do this from inside a 20.04 Docker instead, as shown in the <<docker>> setup. We could fix the build on Ubuntu 21.04, but it would inevitably break again later on.

For other host operating systems see: xref:supported-hosts[xrefstyle=full].

Reserve 12 GB of disk and run:

....
git clone https://github.com/cirosantilli/linux-kernel-module-cheat
cd linux-kernel-module-cheat
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --download-dependencies qemu-buildroot
./run
....

You don't need to clone recursively even though we have `.git` submodules: `download-dependencies` fetches just the submodules that you need for this build to save time.

If something goes wrong, see: xref:common-build-issues[xrefstyle=full] and use our issue tracker: https://github.com/cirosantilli/linux-kernel-module-cheat/issues

The initial build will take a while (30 minutes to 2 hours) to clone and build, see <> for more details.

If you don't want to wait, you could also try the following faster but much more limited methods:

* <>
* <>

but you will soon find that they are simply not enough if you are anywhere near serious about systems programming.

After `./run`, QEMU opens up leaving you in the <>, and you can start playing with the kernel modules inside the simulated system:

....
insmod hello.ko
insmod hello2.ko
rmmod hello
rmmod hello2
....

This should print to the screen:

....
hello init
hello2 init
hello cleanup
hello2 cleanup
....

which are `printk` messages from `init` and `cleanup` methods of those modules.

Sources:

* link:kernel_modules/hello.c[]
* link:kernel_modules/hello2.c[]
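
For reference, a minimal module along those lines looks something like this (a sketch, not the exact contents of the linked files):

....
/* Sketch of a hello kernel module (the real kernel_modules/hello.c
 * may differ in details). */
#include <linux/kernel.h>
#include <linux/module.h>

static int myinit(void)
{
    pr_info("hello init\n");
    return 0;
}

static void myexit(void)
{
    pr_info("hello cleanup\n");
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");
....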

Quit QEMU with:

....
Ctrl-A X
....

See also: xref:quit-qemu-from-text-mode[xrefstyle=full].

All available modules can be found in the link:kernel_modules[] directory.

It is super easy to build for different <>: just use the `--arch` option:

....
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --arch aarch64 --download-dependencies qemu-buildroot
./run --arch aarch64
....

To avoid typing `--arch aarch64` many times, you can set the default arch as explained at: xref:default-command-line-arguments[xrefstyle=full]

I now urge you to read the following sections which contain widely applicable information:

* <>
* <>
* <>
* Linux kernel
** <>
** <>

Once you use <> and <>, your terminal will look a bit like this:

....
[ 1.451857] input: AT Translated Set 2 keyboard as /devices/platform/i8042/s1│loading @0xffffffffc0000000: ../kernel_modules-1.0//timer.ko
[ 1.454310] ledtrig-cpu: registered to indicate activity on CPUs │(gdb) b lkmc_timer_callback
[ 1.455621] usbcore: registered new interface driver usbhid │Breakpoint 1 at 0xffffffffc0000000: file /home/ciro/bak/git/linux-kernel-module
[ 1.455811] usbhid: USB HID core driver │-cheat/out/x86_64/buildroot/build/kernel_modules-1.0/./timer.c, line 28.
[ 1.462044] NET: Registered protocol family 10 │(gdb) c
[ 1.467911] Segment Routing with IPv6 │Continuing.
[ 1.468407] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver │
[ 1.470859] NET: Registered protocol family 17 │Breakpoint 1, lkmc_timer_callback (data=0xffffffffc0002000 )
[ 1.472017] 9pnet: Installing 9P2000 support │ at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[ 1.475461] sched_clock: Marking stable (1473574872, 0)->(1554017593, -80442)│kernel_modules-1.0/./timer.c:28
[ 1.479419] ALSA device list: │28 {
[ 1.479567] No soundcards found. │(gdb) c
[ 1.619187] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 │Continuing.
[ 1.622954] ata2.00: configured for MWDMA2 │
[ 1.644048] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ P5│Breakpoint 1, lkmc_timer_callback (data=0xffffffffc0002000 )
[ 1.741966] tsc: Refined TSC clocksource calibration: 2904.010 MHz │ at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[ 1.742796] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29dc0f4s│kernel_modules-1.0/./timer.c:28
[ 1.743648] clocksource: Switched to clocksource tsc │28 {
[ 2.072945] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8043│(gdb) bt
[ 2.078641] EXT4-fs (vda): couldn't mount as ext3 due to feature incompatibis│#0 lkmc_timer_callback (data=0xffffffffc0002000 )
[ 2.080350] EXT4-fs (vda): mounting ext2 file system using the ext4 subsystem│ at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[ 2.088978] EXT4-fs (vda): mounted filesystem without journal. Opts: (null) │kernel_modules-1.0/./timer.c:28
[ 2.089872] VFS: Mounted root (ext2 filesystem) readonly on device 254:0. │#1 0xffffffff810ab494 in call_timer_fn (timer=0xffffffffc0002000 ,
[ 2.097168] devtmpfs: mounted │ fn=0xffffffffc0000000 ) at kernel/time/timer.c:1326
[ 2.126472] Freeing unused kernel memory: 1264K │#2 0xffffffff810ab71f in expire_timers (head=,
[ 2.126706] Write protecting the kernel read-only data: 16384k │ base=) at kernel/time/timer.c:1363
[ 2.129388] Freeing unused kernel memory: 2024K │#3 __run_timers (base=) at kernel/time/timer.c:1666
[ 2.139370] Freeing unused kernel memory: 1284K │#4 run_timer_softirq (h=) at kernel/time/timer.c:1692
[ 2.246231] EXT4-fs (vda): warning: mounting unchecked fs, running e2fsck isd│#5 0xffffffff81a000cc in __do_softirq () at kernel/softirq.c:285
[ 2.259574] EXT4-fs (vda): re-mounted. Opts: block_validity,barrier,user_xatr│#6 0xffffffff810577cc in invoke_softirq () at kernel/softirq.c:365
hello S98 │#7 irq_exit () at kernel/softirq.c:405
│#8 0xffffffff818021ba in exiting_irq () at ./arch/x86/include/asm/apic.h:541
Apr 15 23:59:23 login[49]: root login on 'console' │#9 smp_apic_timer_interrupt (regs=)
hello /root/.profile │ at arch/x86/kernel/apic/apic.c:1052
# insmod /timer.ko │#10 0xffffffff8180190f in apic_timer_interrupt ()
[ 6.791945] timer: loading out-of-tree module taints kernel. │ at arch/x86/entry/entry_64.S:857
# [ 7.821621] 4294894248 │#11 0xffffffff82003df8 in init_thread_union ()
[ 8.851385] 4294894504 │#12 0x0000000000000000 in ?? ()
│(gdb)
....

==== How to hack stuff

Besides a seamless <>, this project also aims to make it effortless to modify and rebuild several major components of the system, to serve as an awesome development setup.

===== Your first Linux kernel hack

Let's hack up the <>, which is an easy place to start.

Open the file:

....
vim submodules/linux/init/main.c
....

and find the `start_kernel` function, then add there a:

....
pr_info("I'VE HACKED THE LINUX KERNEL!!!");
....

Then rebuild the Linux kernel, quit QEMU and reboot the modified kernel:

....
./build-linux
./run
....

and, sure enough, your message has appeared at the beginning of the boot:

....
<6>[ 0.000000] I'VE HACKED THE LINUX KERNEL!!!
....

So you are now officially a Linux kernel hacker, way to go!

We could have used just link:build[] to rebuild the kernel as in the <> instead of link:build-linux[], but building just the required individual components is preferred during development:

* saves a few seconds from parsing Make scripts and reading timestamps
* makes it easier to understand what is being done in more detail
* allows passing more specific options to customize the build

The link:build[] script is just a lightweight wrapper that calls the smaller build scripts, and you can see what `./build` does with:

....
./build --dry-run
....

see also: <>.

When you reach difficulties, QEMU makes it possible to easily GDB step debug the Linux kernel source code, see: xref:gdb[xrefstyle=full].

===== Your first kernel module hack

Edit link:kernel_modules/hello.c[] to contain:

....
pr_info("hello init hacked\n");
....

and rebuild with:

....
./build-modules
....

Now there are two ways to test it out: the fast way, and the safe way.

The fast way is, without quitting or rebooting QEMU, just directly re-insert the module with:

....
insmod /mnt/9p/out_rootfs_overlay/lkmc/hello.ko
....

and the new `pr_info` message should now show on the terminal at the end of the boot.

This works because we have a <<9p>> mount there setup by default, which mounts the host directory that contains the build outputs on the guest:

....
ls "$(./getvar out_rootfs_overlay_dir)"
....

The fast method is slightly risky because your previously insmodded buggy kernel module attempt might have corrupted the kernel memory, which could affect future runs.

Such failures are however unlikely, and you should be fine if you don't see anything weird happening.

The safe way is to first <>, rebuild the modules, put them in the root filesystem, and then reboot:

....
./build-modules
./build-buildroot
./run --eval-after 'insmod hello.ko'
....

`./build-buildroot` is required after `./build-modules` because it re-generates the root filesystem with the modules that we compiled at `./build-modules`.

You can see that `./build` does that as well, by running:

....
./build --dry-run
....

See also: <>.

`--eval-after` is optional: you could just type `insmod hello.ko` in the terminal, but this makes it run automatically at the end of boot, and then drops you into a shell.

If the guest and host are the same arch, typically x86_64, you can speed up boot further with <>:

....
./run --kvm
....

All of this put together makes the safe procedure acceptably fast for regular development as well.

It is also easy to GDB step debug kernel modules with our setup, see: xref:gdb-step-debug-kernel-module[xrefstyle=full].

===== Your first glibc hack

We use <>, and it is tracked as an unmodified submodule at link:submodules/glibc[], at the exact same version that Buildroot has it, which can be found at: https://github.com/buildroot/buildroot/blob/2018.05/package/glibc/glibc.mk#L13[package/glibc/glibc.mk]. Buildroot 2018.05 applies no patches.

Let's hack up the `puts` function:

....
./build-buildroot -- glibc-reconfigure
....

with the patch:

....
diff --git a/libio/ioputs.c b/libio/ioputs.c
index 706b20b492..23185948f3 100644
--- a/libio/ioputs.c
+++ b/libio/ioputs.c
@@ -38,8 +38,9 @@ _IO_puts (const char *str)
if ((_IO_vtable_offset (_IO_stdout) != 0
|| _IO_fwide (_IO_stdout, -1) == -1)
&& _IO_sputn (_IO_stdout, str, len) == len
+ && _IO_sputn (_IO_stdout, " hacked", 7) == 7
&& _IO_putc_unlocked ('\n', _IO_stdout) != EOF)
- result = MIN (INT_MAX, len + 1);
+ result = MIN (INT_MAX, len + 1 + 7);

_IO_release_lock (_IO_stdout);
return result;
....

And then:

....
./run --eval-after './c/hello.out'
....

outputs:

....
hello hacked
....

Lol!

We can also test our hacked glibc on <> with:

....
./run --userland userland/c/hello.c
....

I just noticed that this is actually a good way to develop glibc for other archs.

In this example, we got away without recompiling the userland program because we made a change that did not affect the glibc ABI, see this answer for an introduction to ABI stability: https://stackoverflow.com/questions/2171177/what-is-an-application-binary-interface-abi/54967743#54967743

Note that for arch agnostic features that don't rely on bleeding edge kernel changes that your host doesn't yet have, you can develop glibc natively as explained at:

* https://stackoverflow.com/questions/10412684/how-to-compile-my-own-glibc-c-standard-library-from-source-and-use-it/52454710#52454710
* https://stackoverflow.com/questions/847179/multiple-glibc-libraries-on-a-single-host/52454603#52454603
* https://stackoverflow.com/questions/2856438/how-can-i-link-to-a-specific-glibc-version/52550158#52550158 more focus on symbol versioning, but no one knows how to do it, so I answered

Tested on a30ed0f047523ff2368d421ee2cce0800682c44e + 1.

===== Your first Binutils hack

Have you ever felt that a single `inc` instruction was not enough? Really? Me too!

So let's hack the <>, which is part of https://en.wikipedia.org/wiki/GNU_Binutils[GNU Binutils], to add a new shiny version of `inc` called... `myinc`!

GCC uses GNU GAS as its backend, so we will test our new mnemonic with an <> test program: link:userland/arch/x86_64/binutils_hack.c[], which is just a copy of link:userland/arch/x86_64/binutils_nohack.c[] but with `myinc` instead of `inc`.

The inline assembly is disabled with an `#ifdef`, so first modify the source to enable that.
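
For reference, the working `inc` version is conceptually just a tiny inline assembly test with an assertion, something like the following sketch (assuming GNU extended asm; the actual files may differ):

....
#include <assert.h>
#include <stdint.h>

int main(void) {
    uint64_t x = 41;
    /* GCC hands this string to GAS, so an unknown mnemonic such as
     * `myinc` fails at assembly time with the error shown below. */
    __asm__ ("inc %0" : "+r" (x));
    assert(x == 42);
    return 0;
}
....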

Then, try to build userland:

....
./build-userland
....

and watch it fail with:

....
binutils_hack.c:8: Error: no such instruction: `myinc %rax'
....

Now, edit the file

....
vim submodules/binutils-gdb/opcodes/i386-tbl.h
....

and add a copy of the `"inc"` instruction just next to it, but with the new name `"myinc"`:

....
diff --git a/opcodes/i386-tbl.h b/opcodes/i386-tbl.h
index af583ce578..3cc341f303 100644
--- a/opcodes/i386-tbl.h
+++ b/opcodes/i386-tbl.h
@@ -1502,6 +1502,19 @@ const insn_template i386_optab[] =
{ { { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 } } } },
+ { "myinc", 1, 0xfe, 0x0, 1,
+ { { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } },
+ { 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0 },
+ { { { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
+ 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 } } } },
{ "sub", 2, 0x28, None, 1,
{ { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
....

Finally, rebuild Binutils, userland and test our program with <>:

....
./build-buildroot -- host-binutils-rebuild
./build-userland --static
./run --static --userland userland/arch/x86_64/binutils_hack.c
....

and we see that `myinc` worked since the assert did not fail!

Tested on b60784d59bee993bf0de5cde6c6380dd69420dda + 1.

===== Your first GCC hack

OK, now time to hack GCC.

For convenience, let's use the <>.

If we run the program link:userland/c/gcc_hack.c[]:

....
./build-userland --static
./run --static --userland userland/c/gcc_hack.c
....

it produces the normal boring output:

....
i = 2
j = 0
....

So how about we swap `++` and `--` to make things more fun?

Open the file:

....
vim submodules/gcc/gcc/c/c-parser.c
....

and find the function `c_parser_postfix_expression_after_primary`.

In that function, swap `case CPP_PLUS_PLUS` and `case CPP_MINUS_MINUS`:

....
diff --git a/gcc/c/c-parser.c b/gcc/c/c-parser.c
index 101afb8e35f..89535d1759a 100644
--- a/gcc/c/c-parser.c
+++ b/gcc/c/c-parser.c
@@ -8529,7 +8529,7 @@ c_parser_postfix_expression_after_primary (c_parser *parser,
expr.original_type = DECL_BIT_FIELD_TYPE (field);
}
break;
- case CPP_PLUS_PLUS:
+ case CPP_MINUS_MINUS:
/* Postincrement. */
start = expr.get_start ();
finish = c_parser_peek_token (parser)->get_finish ();
@@ -8548,7 +8548,7 @@ c_parser_postfix_expression_after_primary (c_parser *parser,
expr.original_code = ERROR_MARK;
expr.original_type = NULL;
break;
- case CPP_MINUS_MINUS:
+ case CPP_PLUS_PLUS:
/* Postdecrement. */
start = expr.get_start ();
finish = c_parser_peek_token (parser)->get_finish ();
....

Now rebuild GCC, the program and re-run it:

....
./build-buildroot -- host-gcc-final-rebuild
./build-userland --static
./run --static --userland userland/c/gcc_hack.c
....

and the new output is now:

....
i = 0
j = 2
....

We need to use the ugly `-final` thing because GCC has two packages in Buildroot, `-initial` and `-final`: https://stackoverflow.com/questions/54992977/how-to-select-an-override-srcdir-source-for-gcc-when-building-buildroot No one is able to explain precisely with a minimal example why this is required:

* https://stackoverflow.com/questions/39883865/why-multiple-passes-for-building-linux-from-scratch-lfs
* https://stackoverflow.com/questions/27457835/why-do-cross-compilers-have-a-two-stage-compilation

==== About the QEMU Buildroot setup

What QEMU and Buildroot are:

* <>
* <>

This is our reference setup and the best supported one: use it unless you have a good reason not to.

It was historically the first one we did, and all sections have been tested with this setup unless explicitly noted.

Read the following sections for further introductory material:

* <>
* <>

[[dry-run]]
=== Dry run to get commands for your project

One of the major features of this repository is that we try to support the `--dry-run` option really well for all scripts.

This option, as the name suggests, outputs the external commands that would be run (or more precisely: equivalent commands), without actually running them.

This allows you to just clone this repository and get full working commands to integrate into your project, without having to build or use this setup further!

For example, we can obtain a QEMU run for the file link:userland/c/hello.c[] in <> by adding `--dry-run` to the normal command:

....
./run --dry-run --userland userland/c/hello.c
....

which as of LKMC a18f28e263c91362519ef550150b5c9d75fa3679 + 1 outputs:

....
+ /path/to/linux-kernel-module-cheat/out/qemu/default/opt/x86_64-linux-user/qemu-x86_64 \
-L /path/to/linux-kernel-module-cheat/out/buildroot/build/default/x86_64/target \
-r 5.2.1 \
-seed 0 \
-trace enable=load_file,file=/path/to/linux-kernel-module-cheat/out/run/qemu/x86_64/0/trace.bin \
-cpu max \
/path/to/linux-kernel-module-cheat/out/userland/default/x86_64/c/hello.out \
;
....

So observe that the command contains:

* `+`: sign to differentiate it from program stdout, much like bash `-x` output. This is not a valid part of the generated Bash command however.
* the actual command, nicely indented, with one argument per line and continuation backslashes, so you can just copy paste it into a terminal
+
For setups that don't support the newline e.g. <>, you can turn them off with `--print-cmd-oneline`
* `;`: both a valid part of the Bash command, and a visual mark of the end of the command

For the specific case of running emulators such as QEMU, the last command is also automatically placed in a file for your convenience and later inspection:

....
cat "$(./getvar run_dir)/run.sh"
....

Since we need this so often, the last run command is also stored for convenience at:

....
cat out/run.sh
....

although this won't of course work well for <>.

Furthermore, `--dry-run` also automatically specifies, in valid Bash shell syntax:

* environment variables used to run the command with syntax `+ ENV_VAR_1=abc ENV_VAR_2=def ./some/command`
* change in working directory with `+ cd /some/new/path && ./some/command`

=== gem5 Buildroot setup

==== About the gem5 Buildroot setup

This setup is like the <>, but it uses http://gem5.org/[gem5] instead of QEMU as a system simulator.

QEMU tries to run as fast as possible and give correct results at the end, but it does not tell us how many CPU cycles it takes to do something, just the number of instructions it ran. This kind of simulation is known as functional simulation.

The number of instructions executed is a very poor estimator of performance because in modern computers, a lot of time is spent waiting for memory requests rather than the instructions themselves.
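
As a toy illustration of that point (generic C, not a file from this repo), the two loops below execute a comparable number of instructions, but the second one chases pointers in a random order and mostly waits on cache misses, a difference that only a detailed timing model can capture:

....
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 22)
static size_t nxt[N];

int main(void) {
    /* Build a random permutation so that the dependent walk below
     * misses the cache on most accesses. */
    for (size_t i = 0; i < N; i++)
        nxt[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rand() % (i + 1);
        size_t tmp = nxt[i]; nxt[i] = nxt[j]; nxt[j] = tmp;
    }
    /* Sequential walk: prefetcher-friendly, few memory stalls. */
    size_t sum = 0;
    for (size_t i = 0; i < N; i++)
        sum += nxt[i];
    /* Dependent random walk: similar instruction count, but most
     * of the time is spent waiting for DRAM. */
    size_t j = 0;
    for (size_t i = 0; i < N; i++)
        j = nxt[j];
    printf("%zu %zu\n", sum, j);
    return 0;
}
....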

gem5 on the other hand, can simulate the system in more detail than QEMU, including:

* simplified CPU pipeline
* caches
* DRAM timing

and can therefore be used to estimate system performance, see: xref:gem5-run-benchmark[xrefstyle=full] for an example.

The downside of gem5 is that it is much slower than QEMU because of the greater simulation detail.

See <> for a more thorough comparison.

==== gem5 Buildroot setup getting started

For the most part, if you just add the `--emulator gem5` option or `*-gem5` suffix to all commands, everything should magically work.

If you haven't built Buildroot yet for <>, you can build from the beginning with:

....
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --download-dependencies gem5-buildroot
./run --emulator gem5
....

If you have already built previously, don't be afraid: gem5 and QEMU use almost the same root filesystem and kernel, so `./build` will be fast.

Remember that the gem5 boot is <> than QEMU since the simulation is more detailed.

If you have a relatively new GCC version and the gem5 build fails on your machine, see: <>.

To get a terminal, either open a new shell and run:

....
./gem5-shell
....

You can quit the shell without killing gem5 by typing tilde followed by a period:

....
~.
....

If you are inside <>, which I highly recommend, you can both run gem5 stdout and open the guest terminal on a split window with:

....
./run --emulator gem5 --tmux
....

See also: xref:tmux-gem5[xrefstyle=full].

At the end of boot, it might not be very clear that you have the shell since some <> messages may appear in front of the prompt like this:

....
# <6>[ 1.215329] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd486fa865, max_idle_ns: 440795259574 ns
<6>[ 1.215351] clocksource: Switched to clocksource tsc
....

but if you look closely, the `PS1` prompt marker `#` is there already, just hit enter and a clear prompt line will appear.

If you forgot to open the shell and gem5 already exited, you can inspect the terminal output post-mortem at:

....
less "$(./getvar --emulator gem5 m5out_dir)/system.pc.com_1.device"
....

More gem5 information is present at: xref:gem5[xrefstyle=full]

Good next steps are:

* <>: how to run a benchmark in gem5 full system, including how to boot Linux, checkpoint and restore to skip the boot on a fast CPU
* <>: understand the output files that gem5 produces, which contain information about your run
* <>: magic guest instructions used to control gem5
* <>: how to add your own files to the image if you have a benchmark that we don't already support out of the box (also send a pull request!)

[[docker]]
=== Docker host setup

This repository has been tested inside clean https://en.wikipedia.org/wiki/Docker_(software)[Docker] containers.

This is a good option if you are on a Linux host, but the native setup failed due to your weird host distribution, and you have better things to do with your life than to debug it. See also: xref:supported-hosts[xrefstyle=full].

For example, to do a <> inside Docker, run:

....
sudo apt-get install docker.io
python3 -m venv .venv
. .venv/bin/activate
./setup
./run-docker create && \
./run-docker sh -- ./build --download-dependencies qemu-buildroot
./run-docker
....

You are now left inside a shell in the Docker! From there, just run as usual:

....
./run
....

The host git top level directory is mounted inside the guest with a https://stackoverflow.com/questions/23439126/how-to-mount-a-host-directory-in-a-docker-container[Docker volume], which means for example that you can use your host's GUI text editor directly on the files. Just don't forget that if you nuke that directory on the guest, then it gets nuked on the host as well!

Command breakdown:

* `./run-docker create`: create the image and container.
+
Needed only the very first time you use Docker, or if you run `./run-docker DESTROY` to restart from scratch or to save some disk space.
+
The image and container name is `lkmc`. The container shows under:
+
....
docker ps -a
....
+
and the image shows under:
+
....
docker images
....
* `./run-docker`: open a shell on the container.
+
If it has not been started previously, start it. This can also be done explicitly with:
+
....
./run-docker start
....
+
Quit the shell as usual with `Ctrl-D`
+
This can be called multiple times from different host terminals to open multiple shells.
* `./run-docker stop`: stop the container.
+
This might save a bit of CPU and RAM once you stop working on this project, but it should not be a lot.
* `./run-docker DESTROY`: delete the container and image.
+
This doesn't really clean the build, since we mount the guest's working directory on the host git top-level, so you basically just got rid of the `apt-get` installs.
+
To actually delete the Docker build, run on host:
+
....
# sudo rm -rf out.docker
....

To use <> from inside Docker, you need a second shell inside the container. You can either do that from another shell with:

....
./run-docker
....

or even better, by starting a <> session inside the container. We install `tmux` by default in the container.

You can also start a second shell and run a command in it at the same time with:

....
./run-docker sh -- ./run-gdb start_kernel
....

To use <> from Docker, run:

....
./run --graphic --vnc
....

and then on host:

....
sudo apt-get install vinagre
./vnc
....

TODO make files created inside Docker be owned by the current user in host instead of `root`:

* https://stackoverflow.com/questions/33681396/how-do-i-write-to-a-volume-container-as-non-root-in-docker
* https://stackoverflow.com/questions/23544282/what-is-the-best-way-to-manage-permissions-for-docker-shared-volumes
* https://stackoverflow.com/questions/31779802/shared-volume-file-permissions-ownership-docker

[[prebuilt]]
=== Prebuilt setup

==== About the prebuilt setup

This setup uses prebuilt binaries that we upload to GitHub from time to time.

We don't currently provide a full prebuilt because it would be too big to host freely, notably because of the cross toolchain.

Our prebuilts currently include:

* <> binaries
** Linux kernel
** root filesystem
* <> binaries for QEMU

For more details, see our <>.

Advantage of this setup: it saves time and disk space on the initial install, which is expensive largely due to building the toolchain.

The limitations are severe however:

* can't <>, since the source and cross toolchain with GDB are not available. Buildroot cannot easily use a host toolchain: xref:prebuilt-toolchain[xrefstyle=full].
+
Maybe we could work around this by just downloading the kernel source somehow, and using a host prebuilt GDB, but we felt that it would be too messy and unreliable.
* you won't get the latest version of this repository. Our <> attempt to automate builds failed, and storing a release for every commit would likely make GitHub mad at us anyway.
* <> is not currently supported. The major blocking point is how to avoid distributing the kernel images twice: once for gem5 which uses `vmlinux`, and once for QEMU which uses `arch/*` images, see also:
** https://github.com/cirosantilli/linux-kernel-module-cheat/issues/79
** <>.

This setup might be good enough for those developing simulators, as that requires less image modification. But once again, if you are serious about this, why not just let your computer build the <> while you take a coffee or a nap? :-)

==== Prebuilt setup getting started

Check out the latest tag and use the Ubuntu packaged QEMU to boot Linux:

....
sudo apt-get install qemu-system-x86
git clone https://github.com/cirosantilli/linux-kernel-module-cheat
cd linux-kernel-module-cheat
git checkout "$(git rev-list --tags --max-count=1)"
./release-download-latest
unzip lkmc-*.zip
./run --qemu-which host
....

You have to check out the latest tag to ensure that the scripts match the release format: https://stackoverflow.com/questions/1404796/how-to-get-the-latest-tag-name-in-current-branch-in-git

This is known not to work for aarch64 on an Ubuntu 16.04 host with QEMU 2.5.0, presumably because QEMU is too old: the terminal does not show any output. I haven't investigated why.

Or to run a baremetal example instead:

....
./run \
--arch aarch64 \
--baremetal userland/c/hello.c \
--qemu-which host \
;
....

Be saner and use our custom built QEMU instead:

....
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --download-dependencies qemu
./run
....

To build the kernel modules as in <> do:

....
git submodule update --depth 1 --init --recursive "$(./getvar linux_source_dir)"
./build-linux --no-modules-install -- modules_prepare
./build-modules --gcc-which host
./run
....

TODO: for now the only way to test those modules out without <> is with 9p, since we currently rely on Buildroot to manipulate the root filesystem.

Command explanation:

* `modules_prepare` does the minimal build procedure required on the kernel for us to be able to compile the kernel modules, and is way faster than doing a full kernel build. A full kernel build would also work however.
* `--gcc-which host` selects your host Ubuntu packaged GCC, since you don't have the Buildroot toolchain
* `--no-modules-install` is required otherwise the `make modules_install` target we run by default fails, since the kernel wasn't built

To modify the Linux kernel, build and use it as usual:

....
git submodule update --depth 1 --init --recursive "$(./getvar linux_source_dir)"
./build-linux
./run
....

////
For gem5, do:

....
git submodule update --init --depth 1 "$(./getvar linux_source_dir)"
sudo apt-get install qemu-utils
./build-gem5
./run --emulator gem5 --qemu-which host
....

`qemu-utils` is required because we currently distribute `.qcow2` files which <>, so we need `qemu-img` to extract them first.

The Linux kernel is required for `extract-vmlinux` to convert the compressed kernel image which QEMU understands into the raw vmlinux that gem5 understands: https://superuser.com/questions/298826/how-do-i-uncompress-vmlinuz-to-vmlinux
////

////
[[ubuntu]]
=== Ubuntu guest setup

==== About the Ubuntu guest setup

This setup is similar to <>, but instead of using Buildroot for the root filesystem, it downloads an Ubuntu image with Docker, and uses that as the root filesystem.

The rationale for choice of Ubuntu as a second distribution in addition to Buildroot can be found at: xref:linux-distro-choice[xrefstyle=full]

Advantages over Buildroot:

* saves build time
* you get to play with a huge selection of Debian packages out of the box
* more representative of most non-embedded production systems than BusyBox

Disadvantages:

* less visibility: https://askubuntu.com/questions/82302/how-to-compile-ubuntu-from-source-code The fact that that question has no answer makes me cringe
* less compatibility, e.g. no one knows what the officially supported cross compilers are: https://askubuntu.com/questions/1046294/what-are-the-officially-supported-cross-compilers-for-ubuntu-server-alternative

Docker is used here just as an image download provider since it has a wide variety of images. Why we don't just download the regular Ubuntu disk image:

* that image is not ready to boot, but rather goes into an interactive installer: https://askubuntu.com/questions/884534/how-to-run-ubuntu-16-04-desktop-on-qemu/1046792#1046792
* the default Ubuntu image comes with a large collection of software and is therefore large. The Docker version is much more minimal.

One alternative would be to use https://wiki.ubuntu.com/Base[Ubuntu base] which can be downloaded from: http://cdimage.ubuntu.com/ubuntu-base That provides a `.tgz` and comes very close to what we obtain with Docker, but without the need for `sudo`.

==== Ubuntu guest setup getting started

TODO

....
sudo ./build-docker
./run --docker
....

`sudo` is required for Docker operations: https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo
////

[[host]]
=== Host kernel module setup

**THIS IS DANGEROUS (AND FUN), YOU HAVE BEEN WARNED**

This method runs the kernel modules directly on your host computer without a VM, and saves you the compilation time and disk usage of the virtual machine method.

It has however severe limitations:

* can't control which kernel version and build options to use. So some of the modules will likely not compile because of kernel API changes, since https://stackoverflow.com/questions/37098482/how-to-build-a-linux-kernel-module-so-that-it-is-compatible-with-all-kernel-rele/45429681#45429681[the Linux kernel does not have a stable kernel module API].
* bugs can easily break your system. E.g.:
** segfaults can trivially lead to a kernel crash, and require a reboot
** your disk could get erased. Yes, this can also happen with `sudo` from userland. But you should not use `sudo` when developing newbie programs. And for the kernel you don't have the choice not to use `sudo`.
** even more subtle system corruption such as https://unix.stackexchange.com/questions/78858/cannot-remove-or-reinsert-kernel-module-after-error-while-inserting-it-without-r[not being able to rmmod]
* can't control which hardware is used, notably the CPU architecture
* can't step debug it with <> easily. The alternatives are https://en.wikipedia.org/wiki/JTAG[JTAG] or <>, but those are less reliable, and require extra hardware.

Still interested?

....
./build-modules --host
....

Compilation will likely fail for some modules because of kernel or toolchain differences that we can't control on the host.

The best workaround is to compile just your modules with:

....
./build-modules --host -- hello hello2
....

which is equivalent to:

....
./build-modules \
--gcc-which host \
--host \
-- \
kernel_modules/hello.c \
kernel_modules/hello2.c \
;
....

Or just remove the `.c` extension from the failing files and try again:

....
cd "$(./getvar kernel_modules_source_dir)"
mv broken.c broken.c~
....

Once you manage to compile, and have come to terms with the fact that this may blow up your host, try it out with:

....
cd "$(./getvar kernel_modules_build_host_subdir)"
sudo insmod hello.ko

# Our module is there.
sudo lsmod | grep hello

# Last message should be: hello init
dmesg -T

sudo rmmod hello

# Last message should be: hello exit
dmesg -T

# Not present anymore
sudo lsmod | grep hello
....

==== Hello host

Minimal host build system example:

....
cd hello_host_kernel_module
make
sudo insmod hello.ko
dmesg
sudo rmmod hello.ko
dmesg
....

=== Userland setup

==== About the userland setup

In order to test the kernel and emulators, userland content in the form of executables and scripts is of course required, and we store it mostly under:

* link:userland/[]
* <>
* <>

When we started this repository, it only contained content that interacted very closely with the kernel, or that required performance analysis.

However, we soon started to notice that this had an increasing overlap with other userland test repositories: we were duplicating build and test infrastructure and even some examples.

Therefore, we decided to consolidate other userland tutorials that we had scattered around into this repository.

Notable userland content that has been, or is being, moved into this repository includes:

* <>
* <>
* <>
* <>
* <>

==== Userland setup getting started

There are several ways to run our <>, notably:

* natively on the host as shown at: xref:userland-setup-getting-started-natively[xrefstyle=full]
+
Can only run examples compatible with your host CPU architecture and OS, but has the fastest setup and runtimes.
* from user mode simulation with:
+
--
** the host prebuilt toolchain: xref:userland-setup-getting-started-with-prebuilt-toolchain-and-qemu-user-mode[xrefstyle=full]
** the Buildroot toolchain you built yourself: xref:qemu-user-mode-getting-started[xrefstyle=full]
--
+
This setup:
+
--
** can run most examples, including those for other CPU architectures, with the notable exception of examples that rely on kernel modules
** can run reproducible approximate performance experiments with gem5, see e.g. <>
--
* from full system simulation as shown at: xref:qemu-buildroot-setup-getting-started[xrefstyle=full].
+
This is the most reproducible and controlled environment, and all examples work there. But it is also the slowest one to set up.

===== Userland setup getting started natively

With this setup, we will use the host toolchain and execute executables directly on the host.

No toolchain build is required, so you can just download your distro toolchain and jump straight into it.

Build an example, run it, and clean it in-tree with:

....
sudo apt-get install gcc
cd userland
./build c/hello
./c/hello.out
./build --clean
....

Source: link:userland/c/hello.c[].
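
For reference, that file is essentially the classic C hello world, something along these lines (a sketch; the repository file may differ in details):

....
#include <stdio.h>

int main(void) {
    puts("hello");
    return 0;
}
....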

Build an entire directory and test it:

....
cd userland
./build c
./test c
....

Build the current directory and test it:

....
cd userland/c
./build
./test
....

As mentioned at <>, tests under link:userland/libs[] require certain optional libraries to be installed, and are not built or tested by default.

You can install those libraries with:

....
cd linux-kernel-module-cheat
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --download-dependencies userland-host
....

and then build the examples and test with:

....
./build --package-all
./test --package-all
....

Pass custom compiler options:

....
./build --ccflags='-foptimize-sibling-calls -foptimize-strlen' --force-rebuild
....

Here we used `--force-rebuild` to force rebuild since the sources weren't modified since the last build.

Some CLI options have more specialized flags, e.g. `-O` for the <>:

....
./build --optimization-level 3 --force-rebuild
....

See also <> for `--static`.

The `build` scripts inside link:userland/[] are just symlinks to link:build-userland-in-tree[] which you can also use from toplevel as:

....
./build-userland-in-tree
./build-userland-in-tree userland/c
./build-userland-in-tree userland/c/hello.c
....

`build-userland-in-tree` is in turn just a thin wrapper around link:build-userland[]:

....
./build-userland --gcc-which host --in-tree userland/c
....

So you can freely use any option supported by the `build-userland` script with `build-userland-in-tree` and `build`.

The situation is analogous for link:userland/test[], link:test-executables-in-tree[] and link:test-executables[], which are further documented at: xref:user-mode-tests[xrefstyle=full].

Do a more clean out-of-tree build instead and run the program:

....
./build-userland --gcc-which host --userland-build-id host
./run --emulator native --userland userland/c/hello.c --userland-build-id host
....

Here we:

* put the host executables in a separate <> to avoid conflict with Buildroot builds.
* ran with the `--emulator native` option to run the program natively

In this case you can debug the program with:

....
./run --debug-vm --emulator native --userland userland/c/hello.c --userland-build-id host
....

as shown at: xref:debug-the-emulator[xrefstyle=full], although direct GDB host usage works as well of course.

===== Userland setup getting started with prebuilt toolchain and QEMU user mode

If you are too lazy to build the Buildroot toolchain and QEMU, but want to run e.g. ARM <> in <>, you can get away on Ubuntu 18.04 with just:

....
sudo apt-get install gcc-aarch64-linux-gnu qemu-system-aarch64
./build-userland \
--arch aarch64 \
--gcc-which host \
--userland-build-id host \
;
./run \
--arch aarch64 \
--qemu-which host \
--userland-build-id host \
--userland userland/c/command_line_arguments.c \
--cli-args 'asdf "qw er"' \
;
....

where:

* `--gcc-which host`: use the host toolchain.
+
We must pass this to `./run` as well because QEMU must know which dynamic libraries to use. See also: xref:user-mode-static-executables[xrefstyle=full].
* `--userland-build-id host`: put the host build into a <>

This presents the usual trade-offs of using prebuilts, as mentioned at: xref:prebuilt[xrefstyle=full].

Other functionality is analogous, e.g. testing:

....
./test-executables \
--arch aarch64 \
--gcc-which host \
--qemu-which host \
--userland-build-id host \
;
....

and <>:

....
./run \
--arch aarch64 \
--gdb \
--gcc-which host \
--qemu-which host \
--userland-build-id host \
--userland userland/c/command_line_arguments.c \
--cli-args 'asdf "qw er"' \
;
....

===== Userland setup getting started full system

First ensure that <> is working.

After doing that setup, you can already execute your userland programs from inside QEMU: the only missing step is how to rebuild executables and run them.

And the answer is exactly analogous to what is shown at: xref:your-first-kernel-module-hack[xrefstyle=full]

For example, if we modify link:userland/c/hello.c[] to print out something different, we can just rebuild it with:

....
./build-userland
....

Source: link:build-userland[]. `./build` calls that script automatically for us when doing the initial full build.

Now, run the program either without rebooting, using the <<9p>> mount:

....
/mnt/9p/out_rootfs_overlay/c/hello.out
....

or shutdown QEMU, add the executable to the root filesystem:

....
./build-buildroot
....

reboot and use the root filesystem as usual:

....
./hello.out
....

=== Baremetal setup

==== About the baremetal setup

This setup does not use the Linux kernel nor Buildroot at all: it just runs your very own minimal OS.

`x86_64` is not currently supported, only `arm` and `aarch64`: I had made some x86 bare metal examples at https://github.com/cirosantilli/x86-bare-metal-examples but I'm too lazy to port them here now. Pull requests are welcome.

The main reason this setup is included in this project, despite the word "Linux" being in the project name, is that a lot of the emulator boilerplate can be reused for both use cases.

This setup allows you to make a tiny OS that runs just a few instructions, use it to fully control the CPU to better understand the simulators, or develop your own OS if you are into that.

You can also use C and a subset of the C standard library because we enable https://en.wikipedia.org/wiki/Newlib[Newlib] by default. See also:

* https://electronics.stackexchange.com/questions/223929/c-standard-libraries-on-bare-metal/400077#400077
* https://stackoverflow.com/questions/13063055/does-a-libc-os-exist/59771531#59771531
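
For instance, thanks to Newlib, ordinary C such as the following sketch (a generic illustration, not a specific file from this repo) can run baremetal, with `stdout` ending up on the serial port:

....
#include <assert.h>
#include <stdio.h>

int main(void) {
    int x = 1;
    x++;
    assert(x == 2);
    /* With Newlib, this reaches the serial port through the
     * low-level glue that the build provides. */
    puts("hello from baremetal");
    return 0;
}
....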

Our C bare-metal compiler is built with https://github.com/crosstool-ng/crosstool-ng[crosstool-NG]. If you have already built <> previously, you will end up with two GCCs installed. Unfortunately I don't see a solution for this, since we need separate toolchains for Newlib on baremetal and glibc on Linux: https://stackoverflow.com/questions/38956680/difference-between-arm-none-eabi-and-arm-linux-gnueabi/38989869#38989869

==== Baremetal setup getting started

Every `.c` file inside link:baremetal/[] and `.S` file inside `baremetal/arch/<arch>/` generates a separate baremetal image.

For example, to run link:baremetal/arch/aarch64/dump_regs.c[] in QEMU do:

....
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --arch aarch64 --download-dependencies qemu-baremetal
./run --arch aarch64 --baremetal baremetal/arch/aarch64/dump_regs.c
....

And the terminal prints the values of certain system registers. This example prints registers that are only accessible from <> or higher, and thus could not be run in userland.

In addition to the examples under link:baremetal/[], several of the <> can also be run in baremetal! This is largely due to the <>.

The examples that work include most <> that don't rely on complicated syscalls such as threads, and almost all the <> examples.

The exact list of userland programs that work in baremetal is specified in <> with the `baremetal` property, but you can also easily find it out with a <>:

....
./test-executables --arch aarch64 --dry-run --mode baremetal
....

For example, we can run the C hello world link:userland/c/hello.c[] simply as:

....
./run --arch aarch64 --baremetal userland/c/hello.c
....

and that outputs to the serial port the string:

....
hello
....

which QEMU shows on the host terminal.

To modify a baremetal program, simply edit the file, e.g.

....
vim userland/c/hello.c
....

and rebuild:

....
./build-baremetal --arch aarch64
./run --arch aarch64 --baremetal userland/c/hello.c
....

`./build qemu-baremetal`, which we ran previously, is only needed for the initial build. That script calls link:build-baremetal[] for us, in addition to building prerequisites such as QEMU and crosstool-NG.

`./build-baremetal` uses crosstool-NG, and so it must be preceded by link:build-crosstool-ng[], which `./build qemu-baremetal` also calls.

Now let's run link:userland/arch/aarch64/add.S[]:

....
./run --arch aarch64 --baremetal userland/arch/aarch64/add.S
....

This time, the terminal does not print anything, which indicates success: if you look into the source, you will see that we just have an assertion there.

You can see a sample assertion fail in link:userland/c/assert_fail.c[]:

....
./run --arch aarch64 --baremetal userland/c/assert_fail.c
....

and the terminal contains:

....
lkmc_exit_status_134
error: simulation error detected by parsing logs
....

and the exit status of our script is 1:

....
echo $?
....

You can run all the baremetal examples in one go and check that all assertions passed with:

....
./test-executables --arch aarch64 --mode baremetal
....

To use gem5 instead of QEMU do:

....
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --arch aarch64 --download-dependencies gem5-baremetal
./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5
....

and then <> open a shell with:

....
./gem5-shell
....

Or as usual, <> users can do both in one go with:

....
./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --tmux
....

TODO: the carriage returns are a bit different than in QEMU, see: xref:gem5-baremetal-carriage-return[xrefstyle=full].

Note that `./build-baremetal` requires the `--emulator gem5` option, and generates separate executable images for both, as can be seen from:

....
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator qemu image)"
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 image)"
....

This is unlike the Linux kernel that has a single image for both QEMU and gem5:

....
echo "$(./getvar --arch aarch64 --emulator qemu image)"
echo "$(./getvar --arch aarch64 --emulator gem5 image)"
....

The reason for that is that on baremetal we don't parse the <> from memory like the Linux kernel does, which tells the kernel for example the UART address, and many other system parameters.

`gem5` also supports the `RealViewPBX` machine, which represents older hardware compared to the default `VExpress_GEM5_V1`:

....
./build-baremetal --arch aarch64 --emulator gem5 --machine RealViewPBX
./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine RealViewPBX
....

see also: xref:gem5-arm-platforms[xrefstyle=full].

This generates yet new separate images with new magic constants:

....
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine VExpress_GEM5_V1 image)"
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine RealViewPBX image)"
....

But just stick to newer and better `VExpress_GEM5_V1` unless you have a good reason to use `RealViewPBX`.

When doing baremetal programming, it is likely that you will want to learn userland assembly first, see: xref:userland-assembly[xrefstyle=full].

For more information on baremetal, see the section: xref:baremetal[xrefstyle=full].

The following subjects are particularly important:

* <>
* <>

=== Build the documentation

You don't need to depend on GitHub.

For a quick and dirty build, install https://asciidoctor.org/[Asciidoctor] however you like and build:

....
asciidoctor README.adoc
xdg-open README.html
....

For development, you will want to do a more controlled build with extra error checking as follows.

For the initial build do:

....
python3 -m venv .venv
. .venv/bin/activate
./setup
./build --download-dependencies docs
....

which also downloads build dependencies.

Then on subsequent runs, just do the faster:

....
./build-doc
....

Source: link:build-doc[]

The HTML output is located at:

....
xdg-open out/README.html
....

More information about our documentation internals can be found at: xref:documentation[xrefstyle=full]

[[gdb]]
== GDB step debug

=== GDB step debug kernel boot

`--gdb-wait` makes QEMU and gem5 wait for a GDB connection, otherwise we could accidentally go past the point we want to break at:

....
./run --gdb-wait
....

Say you want to break at `start_kernel`. So on another shell:

....
./run-gdb start_kernel
....

or at a given line:

....
./run-gdb init/main.c:1088
....

Now QEMU will stop there, and you can use the normal GDB commands:

....
list
next
continue
....

See also:

* https://stackoverflow.com/questions/11408041/how-to-debug-the-linux-kernel-with-gdb-and-qemu/33203642#33203642
* https://stackoverflow.com/questions/4943857/linux-kernel-live-debugging-how-its-done-and-what-tools-are-used/42316607#42316607

==== GDB step debug kernel boot other archs

Just don't forget to pass `--arch` to `./run-gdb`, e.g.:

....
./run --arch aarch64 --gdb-wait
....

and:

....
./run-gdb --arch aarch64 start_kernel
....

[[kernel-o0]]
==== Disable kernel compiler optimizations

https://stackoverflow.com/questions/29151235/how-to-de-optimize-the-linux-kernel-to-and-compile-it-with-o0

`O=0` is an impossible dream, `O=2` being the default.

So get ready for some weird jumps, and `<optimized out>` fun. Why, Linux, why.

The `-O` level of some other userland content can be controlled as explained at: <>.

=== GDB step debug kernel post-boot

Let's observe the kernel `write` system call as it reacts to some userland actions.

Start QEMU with just:

....
./run
....

and after boot inside a shell run:

....
./count.sh
....

which counts to infinity to stdout. Source: link:rootfs_overlay/lkmc/count.sh[].
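
For illustration, a toy C equivalent of that script (hypothetical, not a file from this repo): each iteration enters the kernel through the `write` system call, which is what the breakpoint below will catch:

....
#include <unistd.h>

int main(void) {
    for (;;) {
        /* Each call traps into the kernel's write handler. */
        write(STDOUT_FILENO, "x\n", 2);
        sleep(1);
    }
}
....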

Then in another shell, run:

....
./run-gdb
....

and then hit:

....
Ctrl-C
break __x64_sys_write
continue
continue
continue
....

And you now control the counting on the first shell from GDB!

Before v4.17, the symbol name was just `sys_write`; the change happened at https://github.com/torvalds/linux/commit/d5a00528b58cdb2c71206e18bd021e34c4eab878[d5a00528b58cdb2c71206e18bd021e34c4eab878]. As of Linux v4.19, the function is called `sys_write` in `arm`, and `__arm64_sys_write` in `aarch64`. One good way to find it if the name changes again is to try:

....
rbreak .*sys_write
....

or just have a quick look at the sources!
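
For example, assuming the kernel tree is checked out under `submodules/linux` as in this repository, the definition can be located from the host with:

....
git -C submodules/linux grep -n 'SYSCALL_DEFINE3(write,'
....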

When you hit `Ctrl-C`, if we happen to be inside kernel code at that point (which is very likely if there are no heavy background tasks and we are just waiting on a `sleep`-type system call of the command prompt), we can already see the source for the random place inside the kernel where we stopped.

=== tmux

tmux just makes things even more fun by allowing us to see both the terminal for:

* emulator stdout
* <>

at once without dragging windows around!

First start `tmux` with:

....
tmux
....

Now that you are inside a shell inside tmux, you can start GDB simply with:

....
./run --gdb
....

which is just a convenient shortcut for:

....
./run --gdb-wait --tmux --tmux-args start_kernel
....

This splits the terminal into two panes:

* left: usual QEMU with terminal
* right: GDB

and focuses on the GDB pane.

Now you can navigate with the usual tmux shortcuts:

* switch between the two panes with: `Ctrl-B O`
* close either pane by killing its terminal with `Ctrl-D` as usual

See the tmux manual for further details:

....
man tmux
....

To start again, switch back to the QEMU pane with `Ctrl-B O`, kill the emulator, and re-run:

....
./run --gdb
....

This automatically clears the GDB pane, and starts a new one.

The option `--tmux-args` determines which options will be passed to the program running on the second tmux pane.

For example, the above `./run --gdb` is equivalent to running the two commands:

....
./run --gdb-wait
./run-gdb start_kernel
....

Due to Python's CLI parsing quirks, if the link:run-gdb[] arguments start with a dash `-`, you have to use the `=` sign, e.g. to <>:

....
./run --gdb --tmux-args=--no-continue
....

Bibliography: https://unix.stackexchange.com/questions/152738/how-to-split-a-new-window-and-run-a-command-in-this-new-window-using-tmux/432111#432111

==== tmux gem5

If you are using gem5 instead of QEMU, `--tmux` has a different effect by default: it opens the gem5 terminal instead of the debugger:

....
./run --emulator gem5 --tmux
....

To open a new pane with GDB instead of the terminal, use:

....
./run --emulator gem5 --gdb
....

which is equivalent to:

....
./run --emulator gem5 --gdb-wait --tmux --tmux-args start_kernel --tmux-program gdb
....

`--tmux-program` implies `--tmux`, so we can just write:

....
./run --emulator gem5 --gdb-wait --tmux-program gdb
....

If you also want to see both GDB and the terminal with gem5, then you will need to open a separate shell manually as usual with `./gem5-shell`.

From inside tmux, you can create new terminals on a new window with `Ctrl-B C`, split a pane yet again vertically with `Ctrl-B %`, or horizontally with `Ctrl-B "`.

=== GDB step debug kernel module

https://stackoverflow.com/questions/28607538/how-to-debug-linux-kernel-modules-with-qemu/44095831#44095831

Loadable kernel modules are a bit trickier since the kernel can place them at different memory locations depending on load order.

So we cannot set the breakpoints before `insmod`.

However, the Linux kernel GDB scripts offer the `lx-symbols` command, which takes care of that beautifully for us.

Shell 1:

....
./run
....

Wait for the boot to end and run:

....
insmod timer.ko
....

Source: link:kernel_modules/timer.c[].

This prints a message to dmesg every second.

Shell 2:

....
./run-gdb
....

In GDB, hit `Ctrl-C`, and note how it says:

....
scanning for modules in /root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules
loading @0xffffffffc0000000: /root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/timer.ko
....

That's `lx-symbols` working! Now simply:

....
break lkmc_timer_callback
continue
continue
continue
....

and we now control the callback from GDB!

Just don't forget to remove your breakpoints after `rmmod`, or they will point to stale memory locations.
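
For example, after `rmmod timer` in the guest, drop all breakpoints in GDB with:

....
delete
....

and re-create them after the next `insmod`, once `lx-symbols` has picked up the new load address.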

TODO: why does `break work_func` for `insmod kthread.ko` not work very well? Sometimes it breaks but not others.

[[gdb-step-debug-kernel-module-arm]]
==== GDB step debug kernel module insmodded by init on ARM

TODO on `arm` 51e31cdc2933a774c2a0dc62664ad8acec1d2dbe it does not always work, and `lx-symbols` fails with the message:

....
loading vmlinux
Traceback (most recent call last):
File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 163, in invoke
self.load_all_symbols()
File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 150, in load_all_symbols
[self.load_module_symbols(module) for module in module_list]
File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 110, in load_module_symbols
module_name = module['name'].string()
gdb.MemoryError: Cannot access memory at address 0xbf0000cc
Error occurred in Python command: Cannot access memory at address 0xbf0000cc
....

Can't reproduce on `x86_64` or `aarch64`: those are fine.

It is kind of random: if you just `insmod` manually and then immediately `./run-gdb --arch arm`, then it usually works.

But this fails most of the time: shell 1:

....
./run --arch arm --eval-after 'insmod hello.ko'
....

shell 2:

....
./run-gdb --arch arm
....

then hit `Ctrl-C` on shell 2, and voila.

Then:

....
cat /proc/modules
....

says that the load address is:

....
0xbf000000
....

so it is close to the failing `0xbf0000cc`.

`readelf`:

....
./run-toolchain readelf -- -s "$(./getvar kernel_modules_build_subdir)/hello.ko"
....

does not give any interesting hits at `cc`, no symbol was placed that far.

[[gdb-module-init]]
==== GDB module_init

TODO find a more convenient method. We have working methods, but they are not ideal.

This is not very easy, since by the time the module finishes loading, and `lx-symbols` can work properly, `module_init` has already finished running!

Possibly asked at:

* https://stackoverflow.com/questions/37059320/debug-a-kernel-module-being-loaded
* https://stackoverflow.com/questions/11888412/debug-the-init-module-call-of-a-linux-kernel-module

[[gdb-module-init-step-into-it]]
===== GDB module_init step into it

This is the best method we've found so far.

The kernel calls `module_init` synchronously, therefore it is not hard to step into that call.

As of 4.16, the call happens in `do_one_initcall`, so we can do in shell 1:

....
./run
....

shell 2 after boot finishes (because there are other calls to `do_one_initcall` at boot, presumably for the built-in initcalls):

....
./run-gdb do_one_initcall
....

then step until the line:

....
833 ret = fn();
....

which does the actual call, and then step into it.

For the next time, you can also put a breakpoint there directly:

....
./run-gdb init/main.c:833
....

How we found this out: first we got <> working, and then we did a `bt`. AKA cheating :-)

[[gdb-module-init-calculate-entry-address]]
===== GDB module_init calculate entry address

This works, but is a bit annoying.

The key observation is that the load address of kernel modules is deterministic: there is a pre-allocated memory region, the https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt["module mapping space"], filled from the bottom up.

So once we find the address the first time, we can just reuse it afterwards, as long as we don't modify the module.

Do a fresh boot and get the module:

....
./run --eval-after './pr_debug.sh;insmod fops.ko;./linux/poweroff.out'
....

The boot must be fresh, because the load address changes every time we insert, even after removing previous modules.

The base address shows on terminal:

....
0xffffffffc0000000 .text
....

Now let's find the offset of `myinit`:

....
./run-toolchain readelf -- \
-s "$(./getvar kernel_modules_build_subdir)/fops.ko" | \
grep myinit
....

which gives:

....
30: 0000000000000240 43 FUNC LOCAL DEFAULT 2 myinit
....

so the offset address is `0x240` and we deduce that the function will be placed at:

....
0xffffffffc0000000 + 0x240 = 0xffffffffc0000240
....
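
That arithmetic is easy to script; a minimal shell sketch, assuming the same `.text` base address as above:

....
base=0xffffffffc0000000
# Extract the symbol value column for myinit
# (standard `readelf -s` field layout assumed).
off=0x$(./run-toolchain readelf -- \
  -s "$(./getvar kernel_modules_build_subdir)/fops.ko" | \
  awk '$8 == "myinit" { print $2 }')
printf '0x%x\n' $((base + off))
....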

Now we can just do a fresh boot on shell 1:

....
./run --eval 'insmod fops.ko;./linux/poweroff.out' --gdb-wait
....

and on shell 2:

....
./run-gdb '*0xffffffffc0000240'
....

GDB then breaks, and `lx-symbols` works.

[[gdb-module-init-break-at-the-end-of-sys-init-module]]
===== GDB module_init break at the end of sys_init_module

TODO not working. This could be potentially very convenient.

The idea here is to break at a point late enough inside `sys_init_module`, at which point `lx-symbols` can be called and do its magic.

Beware that there are both `sys_init_module` and `sys_finit_module` syscalls, and `insmod` uses `finit_module` by default.

Both call `do_init_module` however, which is what `lx-symbols` hooks to.

If we try:

....
b sys_finit_module
....

then hitting:

....
n
....

does not break, and insertion happens, likely because of optimizations? <>

Then we try:

....
b do_init_module
....

A naive:

....
fin
....

also fails to break!

Finally, in despair we notice that <> prints the module load address as explained at <>.

So, if we set a breakpoint just after that message is printed by searching where that happens on the Linux source code, we must be able to get the correct load address before `init_module` happens.

[[gdb-module-init-add-trap-instruction]]
===== GDB module_init add trap instruction

This is another possibility: we could modify the module source by adding a trap instruction of some kind.

This appears to be described at: https://www.linuxjournal.com/article/4525

But it refers to a `gdbstart` script which is not in the tree anymore and beyond my `git log` capabilities.

And just adding:

....
asm( " int $3");
....

directly gives an <> as I'd expect.

==== Bypass lx-symbols

Useless, but a good way to show how hardcore you are. Disable `lx-symbols` with:

....
./run-gdb --no-lxsymbols
....

From inside guest:

....
insmod timer.ko
cat /proc/modules
....

as mentioned at:

* https://stackoverflow.com/questions/6384605/how-to-get-address-of-a-kernel-module-loaded-using-insmod/6385818
* https://unix.stackexchange.com/questions/194405/get-base-address-and-size-of-a-loaded-kernel-module

This will give a line of form:

....
fops 2327 0 - Live 0xfffffffa00000000
....

And then tell GDB where the module was loaded with:

....
Ctrl-C
add-symbol-file ../../../rootfs_overlay/x86_64/timer.ko 0xffffffffc0000000
....

Alternatively, if the module panics before you can read `/proc/modules`, there is a <> which shows the load address:

....
echo 8 > /proc/sys/kernel/printk
echo 'file kernel/module.c +p' > /sys/kernel/debug/dynamic_debug/control
./linux/myinsmod.out hello.ko
....

And then search for a line of type:

....
[ 84.877482] 0xfffffffa00000000 .text
....

Tested on 4f4749148273c282e80b58c59db1b47049e190bf + 1.

=== GDB step debug early boot

TODO successfully debug the very first instruction that the Linux kernel runs, before `start_kernel`!

Break at the very first instruction executed by QEMU:

....
./run-gdb --no-continue
....

Note however that early boot parts appear to be relocated in memory somehow, and therefore:

* you won't see the source location in GDB, only assembly
* you won't be able to break by symbol in those early locations

Further discussion at: <>.

In the specific case of gem5 aarch64 at least:

* gem5 relocates the kernel in memory to a fixed location, see e.g. https://gem5.atlassian.net/browse/GEM5-787
* `--param 'system.workload.early_kernel_symbols=True'` should in theory duplicate the symbols to the correct physical location, but it was broken at one point: https://gem5.atlassian.net/browse/GEM5-785
* gem5 executes directly from vmlinux, so there is no decompression code involved, so you actually immediately start running the "true" first instruction from `head.S` as described at: https://stackoverflow.com/questions/18266063/does-linux-kernel-have-main-function/33422401#33422401
* once the MMU gets turned on at kernel symbol `__primary_switched`, the virtual address matches the ELF symbols, and you start seeing correct symbols without the need for `early_kernel_symbols`. This can be observed clearly with `function_trace = True`: https://stackoverflow.com/questions/64049487/how-to-trace-executed-guest-function-symbol-names-with-their-timestamp-in-gem5/64049488#64049488 which produces:
+
....
0: _kernel_flags_le_lo32 (12500)
12500: __crc_tcp_add_backlog (1000)
13500: __crc_crypto_alg_tested (6500)
20000: __crc_tcp_add_backlog (10000)
30000: __crc_crypto_alg_tested (500)
30500: __crc_scsi_is_host_device (5000)
35500: __crc_crypto_alg_tested (1500)
37000: __crc_scsi_is_host_device (4000)
41000: __crc_crypto_alg_tested (3000)
44000: __crc_tcp_add_backlog (263500)
307500: __crc_crypto_alg_tested (975500)
1283000: __crc_tcp_add_backlog (77191500)
78474500: __crc_crypto_alg_tested (1000)
78475500: __crc_scsi_is_host_device (19500)
78495000: __crc_crypto_alg_tested (500)
78495500: __crc_scsi_is_host_device (13500)
78509000: __primary_switched (14000)
78523000: memset (21118000)
99641000: __primary_switched (2500)
99643500: start_kernel (11000)
....
+
so we see that `__primary_switched` is the first non-trash symbol (non-`__crc_*` and non-`_kernel_flags_*`, which are just informative symbols, not actual executable code)

==== Linux kernel entry point

TODO https://stackoverflow.com/questions/2589845/what-are-the-first-operations-that-the-linux-kernel-executes-on-boot

As mentioned at: <>, the very first kernel instructions executed appear to be placed into memory at a different location than that of the kernel ELF section.

As a result, we are unable to break on early symbols such as:

....
./run-gdb extract_kernel
./run-gdb main
....

<> however does show the right symbols! This could be because of <>: QEMU uses the compressed version, and as mentioned on the Stack Overflow answer, the entry point is actually a tiny decompressor routine.

I also tried to hack `run-gdb` with:

....
@@ -81,7 +81,7 @@ else
${gdb} \
-q \\
-ex 'add-auto-load-safe-path $(pwd)' \\
--ex 'file vmlinux' \\
+-ex 'file arch/arm/boot/compressed/vmlinux' \\
-ex 'target remote localhost:${port}' \\
${brk} \
-ex 'continue' \\
....

and now I do have the symbols from `arch/arm/boot/compressed/vmlinux`, but the breaks still don't work.

v4.19 also added a `CONFIG_HAVE_KERNEL_UNCOMPRESSED=y` option for having the kernel uncompressed, which could make following the startup easier, but it is only available on s390. `aarch64` however is already uncompressed by default, so it might be the easiest one. See also: xref:vmlinux-vs-bzimage-vs-zimage-vs-image[xrefstyle=full].

The architecture must then provide the associated `HAVE_KERNEL_UNCOMPRESSED` for the `KERNEL_UNCOMPRESSED` choice to become available:

....
config KERNEL_UNCOMPRESSED
bool "None"
depends on HAVE_KERNEL_UNCOMPRESSED
....

===== arm64 secondary CPU entry point

In gem5 aarch64 Linux v4.18, experimentally the entry point of secondary CPUs seems to be `secondary_holding_pen` as shown at https://gist.github.com/cirosantilli2/34a7bc450fcb6c1c1a910369be1fdd90

What happens is that:

* the bootloader spins the secondary CPUs in a WFE loop
* from CPU0, the kernel writes the secondary entry point (the address of `secondary_holding_pen`) to the address given to it in the `cpu-release-addr` property of the DTB
* the kernel wakes up the bootloader with an SEV, and the bootloader jumps to the address the kernel wrote

The CPU0 action happens at: https://github.com/cirosantilli/linux/blob/v5.7/arch/arm64/kernel/smp_spin_table.c[]:

Here's the code that writes the address and does SEV:

....
static int smp_spin_table_cpu_prepare(unsigned int cpu)
{
__le64 __iomem *release_addr;

if (!cpu_release_addr[cpu])
return -ENODEV;

/*
* The cpu-release-addr may or may not be inside the linear mapping.
* As ioremap_cache will either give us a new mapping or reuse the
* existing linear mapping, we can use it to cover both cases. In
* either case the memory will be MT_NORMAL.
*/
release_addr = ioremap_cache(cpu_release_addr[cpu],
sizeof(*release_addr));
if (!release_addr)
return -ENOMEM;

/*
* We write the release address as LE regardless of the native
* endianess of the kernel. Therefore, any boot-loaders that
* read this address need to convert this address to the
* boot-loader's endianess before jumping. This is mandated by
* the boot protocol.
*/
writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
__flush_dcache_area((__force void *)release_addr,
sizeof(*release_addr));

/*
* Send an event to wake up the secondary CPU.
*/
sev();
....

and here's the code that reads the value from the DTB:

....
static int smp_spin_table_cpu_init(unsigned int cpu)
{
struct device_node *dn;
int ret;

dn = of_get_cpu_node(cpu, NULL);
if (!dn)
return -ENODEV;

/*
* Determine the address from which the CPU is polling.
*/
ret = of_property_read_u64(dn, "cpu-release-addr",
&cpu_release_addr[cpu]);
....

==== Linux kernel arch-agnostic entry point

`start_kernel` is basically the first C function to be executed: https://stackoverflow.com/questions/18266063/does-kernel-have-main-function/33422401#33422401

For the earlier arch-specific entry point, see: <>.

==== Linux kernel early boot messages

When booting Linux on a slow emulator like <>, what you observe is that:

* first nothing shows for a while
* then a bunch of message lines show at once, followed on aarch64 Linux 5.4.3 by:
+
....
[ 0.081311] printk: console [ttyAMA0] enabled
....

This means of course that all the previous messages had been generated earlier and stored, but were only printed to the terminal once the terminal itself was enabled.

Notably for example the very first message:

....
[ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd070]
....

happens very early in the boot process.

If you get a failure before that, it will be hard to see the print messages.

One possible solution is to parse the dmesg buffer, gem5 actually implements that: <>.

=== GDB step debug userland processes

QEMU's `-gdb` GDB breakpoints are set on virtual addresses, so you can in theory debug userland processes as well.

* https://stackoverflow.com/questions/26271901/is-it-possible-to-use-gdb-and-qemu-to-debug-linux-user-space-programs-and-kernel
* https://stackoverflow.com/questions/16273614/debug-init-on-qemu-using-gdb

You will generally want to use <> for this as it is more reliable, but this method can overcome the following limitations of `gdbserver`:

* the emulator does not support host to guest networking. This seems to be the case for gem5 as explained at: xref:gem5-host-to-guest-networking[xrefstyle=full]
* cannot see the start of the `init` process easily
* `gdbserver` alters the working of the kernel, and makes your run less representative

Known limitations of direct userland debugging:

* the kernel might switch context to another process or to the kernel itself e.g. on a system call, and then TODO confirm: the PC would go to weird places and source code would be missing.
+
Solutions to this are being researched at: xref:lx-ps[xrefstyle=full].
* TODO step into shared libraries. If I attempt to load them explicitly:
+
....
(gdb) sharedlibrary ../../staging/lib/libc.so.0
No loaded shared libraries match the pattern `../../staging/lib/libc.so.0'.
....
+
since GDB does not know that libc is loaded.

==== GDB step debug userland custom init

This is the userland debug setup most likely to work, since at init time there is only one userland executable running.

For executables from the link:userland/[] directory such as link:userland/posix/count.c[]:

* Shell 1:
+
....
./run --gdb-wait --kernel-cli 'init=/lkmc/posix/count.out'
....
* Shell 2:
+
....
./run-gdb --userland userland/posix/count.c main
....
+
Alternatively, we could also pass the full path to the executable:
+
....
./run-gdb --userland "$(./getvar userland_build_dir)/posix/count.out" main
....
+
Path resolution is analogous to <>.

Then, as soon as boot ends, we are left inside a debug session that looks just like what `gdbserver` would produce.

==== GDB step debug userland BusyBox init

BusyBox custom init process:

* Shell 1:
+
....
./run --gdb-wait --kernel-cli 'init=/bin/ls'
....
* Shell 2:
+
....
./run-gdb --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox ls_main
....

This follows BusyBox' convention of naming the main function of each applet `<applet>_main`, since the `busybox` executable has many "mains".

BusyBox default init process:

* Shell 1:
+
....
./run --gdb-wait
....
* Shell 2:
+
....
./run-gdb --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox init_main
....

`init` cannot be debugged with <> without modifying the source, or else `/sbin/init` exits early with:

....
"must be run as PID 1"
....

==== GDB step debug userland non-init

Non-init process:

* Shell 1:
+
....
./run --gdb-wait
....
* Shell 2:
+
....
./run-gdb --userland userland/linux/rand_check.c main
....
* Shell 1 after the boot finishes:
+
....
./linux/rand_check.out
....

This is the least reliable setup as there might be other processes that use the given virtual address.

[[gdb-step-debug-userland-non-init-without-gdb-wait]]
===== GDB step debug userland non-init without --gdb-wait

TODO: if I try <> without `--gdb-wait`, the `break main` that we do inside `./run-gdb` says:

....
Cannot access memory at address 0x10604
....

and then GDB never breaks. Tested at ac8663a44a450c3eadafe14031186813f90c21e4 + 1.

The exact behaviour seems to depend on the architecture:

* `arm`: happens always
* `x86_64`: appears to happen only if you try to connect GDB as fast as possible, before init has been reached.
* `aarch64`: could not observe the problem

We have also double checked the address with:

....
./run-toolchain --arch arm readelf -- \
-s "$(./getvar --arch arm userland_build_dir)/linux/myinsmod.out" | \
grep main
....

and from GDB:

....
info line main
....

and both give:

....
000105fc
....

which is just 8 bytes before `0x10604`.

`gdbserver` also says `0x10604`.

However, if we do a `Ctrl-C` in GDB, and then a direct:

....
b *0x000105fc
....

it works. Why?!

On gem5, x86 can also give the `Cannot access memory at address` error, so maybe it is also unreliable on QEMU, and works just by coincidence.

=== GDB call

GDB can call functions as explained at: https://stackoverflow.com/questions/1354731/how-to-evaluate-functions-in-gdb

However this is failing for us:

* some symbols are not visible to `call` even though `b` sees them
* for those that are, `call` fails with an E14 error

E.g.: if we break on `__x64_sys_write` on `count.sh`:

....
>>> call printk(0, "asdf")
Could not fetch register "orig_rax"; remote failure reply 'E14'
>>> b printk
Breakpoint 2 at 0xffffffff81091bca: file kernel/printk/printk.c, line 1824.
>>> call fdget_pos(fd)
No symbol "fdget_pos" in current context.
>>> b fdget_pos
Breakpoint 3 at 0xffffffff811615e3: fdget_pos. (9 locations)
>>>
....

even though `fdget_pos` is the first thing `__x64_sys_write` does:

....
581 SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
582 size_t, count)
583 {
584 struct fd f = fdget_pos(fd);
....

I also noticed that I get the same error:

....
Could not fetch register "orig_rax"; remote failure reply 'E14'
....

when trying to use:

....
fin
....

on many (all?) functions.

See also: https://github.com/cirosantilli/linux-kernel-module-cheat/issues/19

=== GDB view ARM system registers

`info all-registers` shows some of them.

The implementation is described at: https://stackoverflow.com/questions/46415059/how-to-observe-aarch64-system-registers-in-qemu/53043044#53043044

=== GDB step debug multicore userland

For a more minimal baremetal multicore setup, see: xref:arm-baremetal-multicore[xrefstyle=full].

We can set and get which cores the Linux kernel allows a program to run on with `sched_getaffinity` and `sched_setaffinity`:

....
./run --cpus 2 --eval-after './linux/sched_getaffinity.out'
....

Source: link:userland/linux/sched_getaffinity.c[]

Sample output:

....
sched_getaffinity = 1 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0
....

Which shows us that:

* initially:
** all 2 cores were enabled as shown by `sched_getaffinity = 1 1`
** the process was randomly assigned to run on core 1 (the second one) as shown by `sched_getcpu = 1`. If we run this several times, it will also run on core 0 sometimes.
* then we restrict the affinity to just core 0, and we see that the program was actually moved to core 0

The number of cores is modified as explained at: xref:number-of-cores[xrefstyle=full]

`taskset` from the util-linux package sets the initial core affinity of a program:

....
./build-buildroot \
--config 'BR2_PACKAGE_UTIL_LINUX=y' \
--config 'BR2_PACKAGE_UTIL_LINUX_SCHEDUTILS=y' \
;
./run --eval-after 'taskset -c 1,1 ./linux/sched_getaffinity.out'
....

output:

....
sched_getaffinity = 0 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0
....

so we see that the affinity was restricted to the second core from the start.
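
util-linux's `taskset -p` can also query the affinity mask of an already running process, e.g. the current shell:

....
taskset -p $$
....

which prints the mask in hexadecimal, e.g. `3` when both cores are allowed.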

Let's do a QEMU observation to justify this example being in the repository with <>.

We will run our `./linux/sched_getaffinity.out` infinitely many times, on core 0 and core 1 alternatively:

....
./run \
--cpus 2 \
--eval-after 'i=0; while true; do taskset -c $i,$i ./linux/sched_getaffinity.out; i=$((! $i)); done' \
--gdb-wait \
;
....

on another shell:

....
./run-gdb --userland "$(./getvar userland_build_dir)/linux/sched_getaffinity.out" main
....

Then, inside GDB:

....
(gdb) info threads
Id Target Id Frame
* 1 Thread 1 (CPU#0 [running]) main () at sched_getaffinity.c:30
2 Thread 2 (CPU#1 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
(gdb) c
(gdb) info threads
Id Target Id Frame
1 Thread 1 (CPU#0 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
* 2 Thread 2 (CPU#1 [running]) main () at sched_getaffinity.c:30
(gdb) c
....

and we observe that `info threads` shows the actual correct core on which the process was restricted to run by `taskset`!

We should also try it out with kernel modules: https://stackoverflow.com/questions/28347876/set-cpu-affinity-on-a-loadable-linux-kernel-module

TODO we then tried:

....
./run --cpus 2 --eval-after './linux/sched_getaffinity_threads.out'
....

and:

....
./run-gdb --userland "$(./getvar userland_build_dir)/linux/sched_getaffinity_threads.out"
....

in order to switch between two simultaneous live threads with different affinities, but it just didn't break on our threads:

....
b main_thread_0
....

Note that secondary cores in gem5 are kind of broken however: <>.

Bibliography:

* https://stackoverflow.com/questions/10490756/how-to-use-sched-getaffinity-and-sched-setaffinity-in-linux-from-c/50117787#50117787
** https://stackoverflow.com/questions/663958/how-to-control-which-core-a-process-runs-on/50210009#50210009
** https://stackoverflow.com/questions/280909/cpu-affinity/54478296#54478296
** https://unix.stackexchange.com/questions/73/how-can-i-set-the-processor-affinity-of-a-process-on-linux/441098#441098 (summary only)
* https://stackoverflow.com/questions/42800801/how-to-use-gdb-to-debug-qemu-with-smp-symmetric-multiple-processors

=== Linux kernel GDB scripts

We source the Linux kernel GDB scripts by default for `lx-symbols`, but they also contain some other goodies worth looking into.

Those scripts basically parse some in-kernel data structures to offer greater visibility with GDB.

All defined commands are prefixed by `lx-`, so to get a full list just try to tab complete that.

There aren't as many as I'd like, and the ones that do exist are pretty self explanatory, but let's give a few examples.

Show dmesg:

....
lx-dmesg
....

Show the <>:

....
lx-cmdline
....

Dump the device tree to a `fdtdump.dtb` file in the current directory:

....
lx-fdtdump
pwd
....
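
The resulting `fdtdump.dtb` is a binary flattened device tree, which can then be pretty printed on the host, e.g. with the `fdtdump` tool from the device-tree-compiler package:

....
fdtdump fdtdump.dtb
....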

List inserted kernel modules:

....
lx-lsmod
....

Sample output:

....
Address Module Size Used by
0xffffff80006d0000 hello 16384 0
....

Bibliography:

* https://events.static.linuxfound.org/sites/events/files/slides/Debugging%20the%20Linux%20Kernel%20with%20GDB.pdf
* https://wiki.linaro.org/LandingTeams/ST/GDB

==== lx-ps

List all processes:

....
lx-ps
....

Sample output:

....
0xffff88000ed08000 1 init
0xffff88000ed08ac0 2 kthreadd
....

The second and third fields are obviously PID and process name.

The first one is more interesting, and contains the address of the `task_struct` in memory.

This can be confirmed with:

....
p *(struct task_struct *)0xffff88000ed08000
....

which contains the correct PID for all threads I've tried:

....
pid = 1,
....
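
Other fields of `task_struct` can be read the same way, e.g. the process name:

....
p ((struct task_struct *)0xffff88000ed08000)->comm
....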

TODO get the PC of the kthreads: https://stackoverflow.com/questions/26030910/find-program-counter-of-process-in-kernel Then we would be able to see where the threads are stopped in the code!

On ARM, I tried:

....
task_pt_regs((struct thread_info *)((struct task_struct)*0xffffffc00e8f8000))->uregs[ARM_pc]
....

but `task_pt_regs` is a `#define` and GDB cannot see defines without `-ggdb3`: https://stackoverflow.com/questions/2934006/how-do-i-print-a-defined-constant-in-gdb which is apparently not set?

Bibliography:

* https://stackoverflow.com/questions/9561546/thread-aware-gdb-for-kernel
* https://wiki.linaro.org/LandingTeams/ST/GDB
* https://events.static.linuxfound.org/sites/events/files/slides/Debugging%20the%20Linux%20Kernel%20with%20GDB.pdf presentation: https://www.youtube.com/watch?v=pqn5hIrz3A8

[[config-pid-in-contextidr]]
===== CONFIG_PID_IN_CONTEXTIDR

As discussed at https://stackoverflow.com/questions/54133479/accessing-logical-software-thread-id-in-gem5[], on ARM the kernel can store an indication of the PID in the CONTEXTIDR_EL1 register, making it much easier to observe from simulators.

In particular, gem5 prints that number out by default on `ExecAll` messages!

Let's test it out with <> + <>:

....
./build-linux --arch aarch64 --linux-build-id CONFIG_PID_IN_CONTEXTIDR --config 'CONFIG_PID_IN_CONTEXTIDR=y'
# Checkpoint run.
./run --arch aarch64 --emulator gem5 --linux-build-id CONFIG_PID_IN_CONTEXTIDR --eval './gem5.sh'
# Trace run.
./run \
--arch aarch64 \
--emulator gem5 \
--gem5-readfile 'posix/getpid.out; posix/getpid.out' \
--gem5-restore 1 \
--linux-build-id CONFIG_PID_IN_CONTEXTIDR \
--trace FmtFlag,ExecAll,-ExecSymbol \
;
....

The terminal runs both programs which output their PID to stdout:

....
pid=44
pid=45
....

By quickly inspecting the `trace.txt` file, we immediately notice that the `system.cpu: A` part of the logs, which used to always be `system.cpu: A0`, now has a few different values! Nice!

We can briefly summarize those values by removing repetitions:

....
cut -d' ' -f4 "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)" | uniq -c
....

gives:

....
97227 A39
147476 A38
222052 A40
1 terminal
1117724 A40
27529 A31
43868 A40
27487 A31
138349 A40
13781 A38
231246 A40
25536 A38
28337 A40
214799 A38
963561 A41
92603 A38
27511 A31
224384 A38
564949 A42
182360 A38
729009 A43
8398 A23
20200 A10
636848 A43
187995 A44
27529 A31
70071 A44
16981 A0
623806 A44
16981 A0
139319 A44
24487 A0
174986 A44
25420 A0
89611 A44
16981 A0
183184 A44
24728 A0
89608 A44
17226 A0
899075 A44
24974 A0
250608 A44
137700 A43
1497997 A45
227485 A43
138147 A38
482646 A46
....

I'm not smart enough to be able to deduce all of those IDs, but we can at least see that:

* A44 and A45 are there as expected from stdout!
* A39 must be the end of the execution of `m5 checkpoint`
* so we guess that A38 is the shell as it comes next
* the weird "terminal" line is `336969745500: system.terminal: attach terminal 0`
* which is the shell PID? I should have printed that as well :-)
* why are there so many other PIDs? This was supposed to be a silent system without daemons!
* A0 is presumably the kernel. However, we see process switches without going into A0, so it appears that kernel instructions are counted as part of the current process
* A46 has to be the `m5 exit` call

Or if you want to have some real fun, try: link:baremetal/arch/aarch64/contextidr_el1.c[]:

....
./run --arch aarch64 --emulator gem5 --baremetal baremetal/arch/aarch64/contextidr_el1.c --trace-insts-stdout
....

in which we directly set the register ourselves! Output excerpt:

....
31500: system.cpu: A0 T0 : @main+12 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000001 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
32000: system.cpu: A1 T0 : @main+16 : msr contextidr_el1, x0 : IntAlu : D=0x0000000000000001 flags=(IsInteger|IsSerializeAfter|IsNonSpeculative)
32500: system.cpu: A1 T0 : @main+20 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000001 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
33000: system.cpu: A1 T0 : @main+24 : add w0, w0, #1 : IntAlu : D=0x0000000000000002 flags=(IsInteger)
33500: system.cpu: A1 T0 : @main+28 : str x0, [sp, #12] : MemWrite : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsStore)
34000: system.cpu: A1 T0 : @main+32 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
34500: system.cpu: A1 T0 : @main+36 : subs w0, #9 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
35000: system.cpu: A1 T0 : @main+40 : b.le : IntAlu : flags=(IsControl|IsDirectControl|IsCondControl)
35500: system.cpu: A1 T0 : @main+12 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
36000: system.cpu: A2 T0 : @main+16 : msr contextidr_el1, x0 : IntAlu : D=0x0000000000000002 flags=(IsInteger|IsSerializeAfter|IsNonSpeculative)
36500: system.cpu: A2 T0 : @main+20 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
37000: system.cpu: A2 T0 : @main+24 : add w0, w0, #1 : IntAlu : D=0x0000000000000003 flags=(IsInteger)
37500: system.cpu: A2 T0 : @main+28 : str x0, [sp, #12] : MemWrite : D=0x0000000000000003 A=0x82fffffc flags=(IsInteger|IsMemRef|IsStore)
38000: system.cpu: A2 T0 : @main+32 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000003 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
38500: system.cpu: A2 T0 : @main+36 : subs w0, #9 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
39000: system.cpu: A2 T0 : @main+40 : b.le : IntAlu : flags=(IsControl|IsDirectControl|IsCondControl)
39500: system.cpu: A2 T0 : @main+12 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000003 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
40000: system.cpu: A3 T0 : @main+16 : msr contextidr_el1, x0 : IntAlu : D=0x0000000000000003 flags=(IsInteger|IsSerializeAfter|IsNonSpeculative)
....

<> D13.2.27 "CONTEXTIDR_EL1, Context ID Register (EL1)" documents `CONTEXTIDR_EL1` as:

____
Identifies the current Process Identifier.

The value of the whole of this register is called the Context ID and is used by:

* The debug logic, for Linked and Unlinked Context ID matching.
* The trace logic, to identify the current process.

The significance of this register is for debug and trace use only.
____

Tested on 145769fc387dc5ee63ec82e55e6b131d9c968538 + 1.

=== Debug the GDB remote protocol

For when it breaks again, or you want to add a new feature!

....
./run --debug
./run-gdb --before '-ex "set remotetimeout 99999" -ex "set debug remote 1"' start_kernel
....

See also: https://stackoverflow.com/questions/13496389/gdb-remote-protocol-how-to-analyse-packets

[[remote-g-packet]]
==== Remote 'g' packet reply is too long

This error means that the GDB server, e.g. in QEMU, sent more registers than the GDB client expected.

This can happen for the following reasons:

* you set the architecture of the client wrong, often 32 vs 64 bit as mentioned at: https://stackoverflow.com/questions/4896316/gdb-remote-cross-debugging-fails-with-remote-g-packet-reply-is-too-long
* there is a bug in the GDB server and the XML description does not match the number of registers actually sent
* the GDB server does not send XML target descriptions and your GDB expects a different number of registers by default. E.g., gem5 d4b3e064adeeace3c3e7d106801f95c14637c12f does not send the XML files

The XML target description format is described a bit further at: https://stackoverflow.com/questions/46415059/how-to-observe-aarch64-system-registers-in-qemu/53043044#53043044

== KGDB

KGDB is kernel dark magic that allows you to GDB the kernel on real hardware without any extra hardware support.

It is useless with QEMU since we already have full system visibility with `-gdb`. So the goal of this setup is just to prepare you for what to expect when you will be in the trenches of real hardware.

KGDB is cheaper than JTAG (free) and easier to set up (all you need is serial), but has less visibility since it depends on the kernel working: e.g. it dies on panic and cannot see the boot sequence.

First run the kernel with:

....
./run --kgdb
....

this passes the following options on the kernel CLI:

....
kgdbwait kgdboc=ttyS1,115200
....

`kgdbwait` tells the kernel to wait for KGDB to connect.

So the kernel sets things up enough for KGDB to start working, and then boot pauses waiting for connection:

....
<6>[ 4.866050] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
<6>[ 4.893205] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
<6>[ 4.916271] 00:06: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
<6>[ 4.987771] KGDB: Registered I/O driver kgdboc
<2>[ 4.996053] KGDB: Waiting for connection from remote gdb...

Entering kdb (current=0x(____ptrval____), pid 1) on processor 0 due to Keyboard Entry
[0]kdb>
....

KGDB expects the connection at `ttyS1`, our second serial port after `ttyS0` which contains the terminal.

The last line is the KDB prompt, and is covered at: xref:kdb[xrefstyle=full]. Typing now shows nothing because that prompt is expecting input from `ttyS1`.

Instead, we connect to the serial port `ttyS1` with GDB:

....
./run-gdb --kgdb --no-continue
....

Once GDB connects, it is left inside the function `kgdb_breakpoint`.

So now we can set breakpoints and continue as usual.

For example, in GDB:

....
continue
....

Then in QEMU:

....
./count.sh &
./kgdb.sh
....

link:rootfs_overlay/lkmc/kgdb.sh[] pauses the kernel for KGDB, and gives control back to GDB.
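
If you'd rather not use the helper script, the standard way to trap into KGDB is the magic SysRq `g` (assuming SysRq support is enabled in the kernel config; this is presumably what the script does under the hood):

....
echo g > /proc/sysrq-trigger
....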

And now in GDB we do the usual:

....
break __x64_sys_write
continue
continue
continue
continue
....

And now you can count from KGDB!

If you do `break __x64_sys_write` immediately after `./run-gdb --kgdb`, it fails with an error of type `KGDB: BP remove failed`. I think this is because it would break too early in the boot sequence, when KGDB is not yet ready.

See also:

* https://github.com/torvalds/linux/blob/v4.9/Documentation/DocBook/kgdb.tmpl
* https://stackoverflow.com/questions/22004616/qemu-kernel-debugging-with-kgdb/44197715#44197715

=== KGDB ARM

TODO: we would need a second serial for KGDB to work, but it is not currently supported on `arm` and `aarch64` with `-M virt` that we use: https://unix.stackexchange.com/questions/479085/can-qemu-m-virt-on-arm-aarch64-have-multiple-serial-ttys-like-such-as-pl011-t/479340#479340

One possible workaround for this would be to use <>.

Main more generic question: https://stackoverflow.com/questions/14155577/how-to-use-kgdb-on-arm

=== KGDB kernel modules

Just works as you would expect:

....
insmod timer.ko
./kgdb.sh
....

In GDB:

....
break lkmc_timer_callback
continue
continue
continue
....

and you now control the count.

=== KDB

KDB is a way to use KGDB directly in your main console, without GDB.

Advantage over KGDB: you can do everything in one serial. This can actually be important if you only have one serial port for both the shell and the debugger.

Disadvantage: not as much functionality as GDB, especially when you use Python scripts. Notably, TODO confirm: you can't see the kernel source code and line step as from GDB, since the kernel source is not available on the guest (ah, if only debugging information supported full source, or if the kernel had a crazy mechanism to embed it).

Run QEMU as:

....
./run --kdb
....

This passes `kgdboc=ttyS0` to the Linux CLI, therefore using our main console. Then in QEMU:

....
[0]kdb> go
....

And now the `kdb>` prompt is responsive because it is listening to the main console.

After boot finishes, run the usual:

....
./count.sh &
./kgdb.sh
....

And you are back in KDB. Now you can count with:

....
[0]kdb> bp __x64_sys_write
[0]kdb> go
[0]kdb> go
[0]kdb> go
[0]kdb> go
....

And you will break whenever `__x64_sys_write` is hit.

You can see further commands with:

....
[0]kdb> help
....

The other KDB commands allow you to step instructions, view memory, registers and some higher level kernel runtime data similar to the superior GDB Python scripts.
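
A few examples (names from mainline KDB; run `help` for the authoritative list in your kernel version):

....
[0]kdb> bt
[0]kdb> rd
[0]kdb> md 0xffffffff81000000
[0]kdb> ps
....

`bt` shows a backtrace, `rd` dumps registers, `md` displays memory at an address, and `ps` lists processes.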

==== KDB graphic

You can also use KDB directly from the <> window with:

....
./run --graphic --kdb
....

This setup could be used to debug the kernel on machines without serial, such as modern desktops.

This works because `--graphic` adds `kbd` (which stands for `KeyBoarD`!) to `kgdboc`.

==== KDB ARM

TODO neither `arm` and `aarch64` are working as of 1cd1e58b023791606498ca509256cc48e95e4f5b + 1.

`arm` seems to place and hit the breakpoint correctly, but no matter how many `go` commands I do, the `count.sh` stdout simply does not show.

`aarch64` seems to place the breakpoint correctly, but after the first `go` the kernel oopses with warning:

....
WARNING: CPU: 0 PID: 46 at /root/linux-kernel-module-cheat/submodules/linux/kernel/smp.c:416 smp_call_function_many+0xdc/0x358
....

and stack trace:

....
smp_call_function_many+0xdc/0x358
kick_all_cpus_sync+0x30/0x38
kgdb_flush_swbreak_addr+0x3c/0x48
dbg_deactivate_sw_breakpoints+0x7c/0xb8
kgdb_cpu_enter+0x284/0x6a8
kgdb_handle_exception+0x138/0x240
kgdb_brk_fn+0x2c/0x40
brk_handler+0x7c/0xc8
do_debug_exception+0xa4/0x1c0
el1_dbg+0x18/0x78
__arm64_sys_write+0x0/0x30
el0_svc_handler+0x74/0x90
el0_svc+0x8/0xc
....

My theory is that every serious ARM developer has JTAG, and no one ever tests this, and the kernel code is just broken.

== gdbserver

Step debug userland processes to understand how they are talking to the kernel.

First build `gdbserver` into the root filesystem:

....
./build-buildroot --config 'BR2_PACKAGE_GDB=y'
....

Then on guest, to debug link:userland/c/command_line_arguments.c[]:

....
./gdbserver.sh ./c/command_line_arguments.out asdf qwer
....

Source: link:rootfs_overlay/lkmc/gdbserver.sh[].
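
Under the hood, the script boils down to an invocation along these lines (the listen port here is an assumption for illustration; check the script for the exact one):

....
gdbserver :1234 ./c/command_line_arguments.out asdf qwer
....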

And on host:

....
./run-gdb --gdbserver --userland userland/c/command_line_arguments.c main
....

or alternatively with the path to the executable itself:

....
./run-gdb --gdbserver --userland "$(./getvar userland_build_dir)/c/command_line_arguments.out" main
....

Bibliography: https://reverseengineering.stackexchange.com/questions/8829/cross-debugging-for-arm-mips-elf-with-qemu-toolchain/16214#16214

=== gdbserver BusyBox

Analogous to <>:

....
./gdbserver.sh ls
....

on host you need:

....
./run-gdb --gdbserver --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox ls_main
....

=== gdbserver libc

Our setup gives you the rare opportunity to step debug libc and other system libraries.

For example in the guest:

....
./gdbserver.sh ./posix/count.out
....

Then on host:

....
./run-gdb --gdbserver --userland userland/posix/count.c main
....

and inside GDB:

....
break sleep
continue
....

And you are now left inside the `sleep` function of our default libc implementation uclibc https://cgit.uclibc-ng.org/cgi/cgit/uclibc-ng.git/tree/libc/unistd/sleep.c?h=v1.0.30#n91[`libc/unistd/sleep.c`]!

You can also step into the `sleep` call:

....
step
....

This is made possible by the GDB command that we use by default:

....
set sysroot ${common_buildroot_build_dir}/staging
....

which automatically finds unstripped shared libraries on the host for us.

See also: https://stackoverflow.com/questions/8611194/debugging-shared-libraries-with-gdbserver/45252113#45252113

=== gdbserver dynamic loader

TODO: try to step debug the dynamic loader. Would be even easier if `starti` is available: https://stackoverflow.com/questions/10483544/stopping-at-the-first-machine-code-instruction-in-gdb

Bibliography: https://stackoverflow.com/questions/20114565/gdb-step-into-dynamic-linkerld-so-code

== CPU architecture

The portability of the kernel and toolchains is amazing: change an option and most things magically work on completely different hardware.

To use `arm` instead of x86 for example:

....
./build-buildroot --arch arm
./run --arch arm
....

Debug:

....
./run --arch arm --gdb-wait
# On another terminal.
./run-gdb --arch arm
....

We also have one letter shorthand names for the architectures and `--arch` option:

....
# aarch64
./run -a A
# arm
./run -a a
# x86_64
./run -a x
....

Known quirks of the supported architectures are documented in this section.

[[x86-64]]
=== x86_64

==== ring0

This example illustrates how reading from the x86 control registers with `mov crX, rax` can only be done from kernel land on ring0.

From kernel land:

....
insmod ring0.ko
....

works and outputs the registers, for example:

....
cr0 = 0xFFFF880080050033
cr2 = 0xFFFFFFFF006A0008
cr3 = 0xFFFFF0DCDC000
....

However if we try to do it from userland:

....
./ring0.out
....

stdout gives:

....
Segmentation fault
....

and dmesg outputs:

....
traps: ring0.out[55] general protection ip:40054c sp:7fffffffec20 error:0 in ring0.out[400000+1000]
....

Sources:

* link:kernel_modules/ring0.c[]
* link:lkmc/ring0.h[]
* link:userland/arch/x86_64/ring0.c[]

In both cases, we attempt to run the exact same code which is shared on the `ring0.h` header file.

Bibliography:

* https://stackoverflow.com/questions/7415515/how-to-access-the-control-registers-cr0-cr2-cr3-from-a-program-getting-segmenta/7419306#7419306
* https://stackoverflow.com/questions/18717016/what-are-ring-0-and-ring-3-in-the-context-of-operating-systems/44483439#44483439

=== arm

==== Run arm executable in aarch64

TODO Can you run arm executables in the aarch64 guest? https://stackoverflow.com/questions/22460589/armv8-running-legacy-32-bit-applications-on-64-bit-os/51466709#51466709

I've tried:

....
./run-toolchain --arch aarch64 gcc -- -static ~/test/hello_world.c -o "$(./getvar p9_dir)/a.out"
./run --arch aarch64 --eval-after '/mnt/9p/data/a.out'
....

but it fails with:

....
a.out: line 1: syntax error: unexpected word (expecting ")")
....

=== MIPS

We used to "support" it until f8c0502bb2680f2dbe7c1f3d7958f60265347005 (it booted), but dropped it since no one was testing it often.

If you want to revive and maintain it, send a pull request.

=== Other architectures

It should not be too hard to port this repository to any architecture that Buildroot supports. Pull requests are welcome.

== init

When the Linux kernel finishes booting, it runs an executable as the first and only userland process. This executable is called the `init` program.

The init process is then responsible for setting up the entire userland (or destroying everything when you want to have fun).

This typically means reading some configuration files (e.g. `/etc/initrc`) and forking a bunch of userland executables based on those files, including the very interactive shell that we end up on.

systemd provides a "popular" init implementation for desktop distros as of 2017.

BusyBox provides its own minimalistic init implementation which Buildroot, and therefore this repo, uses by default.

The `init` program can be either an executable shell text file, or a compiled ELF file. It becomes easy to accept this once you see that the `exec` system call handles both cases equally: https://unix.stackexchange.com/questions/174062/can-the-init-process-be-a-shell-script-in-linux/395375#395375

The `init` executable is searched for in a list of paths in the root filesystem, including `/init`, `/sbin/init` and a few others. For more details see: xref:path-to-init[xrefstyle=full]

=== Replace init

To have more control over the system, you can replace BusyBox's init with your own.

The most direct way to replace `init` with our own is to just use the `init=` <> directly:

....
./run --kernel-cli 'init=/lkmc/count.sh'
....

This just counts every second forever and does not give you a shell.

This method is not very flexible however, as it is hard to reliably pass multiple commands and command line arguments to the init with it, as explained at: xref:init-environment[xrefstyle=full].

For this reason, we have created a more robust helper method with the `--eval` option:

....
./run --eval 'echo "asdf qwer";insmod hello.ko;./linux/poweroff.out'
....

It is basically a shortcut for:

....
./run --kernel-cli 'init=/lkmc/eval_base64.sh - lkmc_eval="insmod hello.ko;./linux/poweroff.out"'
....

Source: link:rootfs_overlay/lkmc/eval_base64.sh[].

This allows quoting and newlines by base64 encoding on host, and decoding on guest, see: xref:kernel-command-line-parameters-escaping[xrefstyle=full].

It also automatically chooses between `init=` and `rcinit=` for you, see: xref:path-to-init[xrefstyle=full]

`--eval` replaces BusyBox' init completely, which makes things more minimal, but also has the following consequences:

* `/etc/fstab` mounts are not done, notably `/proc` and `/sys`, test it out with:
+
....
./run --eval 'echo asdf;ls /proc;ls /sys;echo qwer'
....
* no shell is launched at the end of boot for you to interact with the system. You could explicitly add a `sh` at the end of your commands however:
+
....
./run --eval 'echo hello;sh'
....

The best way to overcome those limitations is to use: xref:init-busybox[xrefstyle=full]

If the script is large, you can add it to a gitignored file and pass that to `--eval` as in:

....
echo '
cd /lkmc
insmod hello.ko
./linux/poweroff.out
' > data/gitignore.sh
./run --eval "$(cat data/gitignore.sh)"
....

or add it as a file to the guest root filesystem and rebuild:

....
echo '#!/bin/sh
cd /lkmc
insmod hello.ko
./linux/poweroff.out
' > rootfs_overlay/lkmc/gitignore.sh
chmod +x rootfs_overlay/lkmc/gitignore.sh
./build-buildroot
./run --kernel-cli 'init=/lkmc/gitignore.sh'
....

Remember that if your init returns, the kernel will panic; there are just two non-panic possibilities:

* run forever in a loop or long sleep
* `poweroff` the machine

==== poweroff.out

Just using BusyBox' `poweroff` at the end of the `init` does not work and the kernel panics:

....
./run --eval poweroff
....

because BusyBox' `poweroff` tries to do some fancy stuff like killing init, likely to allow userland to shut down nicely.

But this fails when we are `init` itself!

BusyBox' `poweroff` works more brutally and effectively if you add `-f`:

....
./run --eval 'poweroff -f'
....

but why not just use our minimal `./linux/poweroff.out` and be done with it?

....
./run --eval './linux/poweroff.out'
....

Source: link:userland/linux/poweroff.c[]

This also illustrates how to shutdown the computer from C: https://stackoverflow.com/questions/28812514/how-to-shutdown-linux-using-c-or-qt-without-call-to-system

[[sleep-forever-out]]
==== sleep_forever.out

I dare you to guess what this does:

....
./run --eval './posix/sleep_forever.out'
....

Source: link:userland/posix/sleep_forever.c[]

This executable is a convenient simple init that does not panic and sleeps instead.

[[time-boot-out]]
==== time_boot.out

Get a reasonable answer to "how long does boot take in guest time?":

....
./run --eval-after './linux/time_boot.c'
....

Source: link:userland/linux/time_boot.c[]

That executable writes to `dmesg` directly through `/dev/kmsg` a message of type:

....
[ 2.188242] /path/to/linux-kernel-module-cheat/userland/linux/time_boot.c
....

which tells us that boot took `2.188242` seconds based on the dmesg timestamp.
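
You can reproduce the mechanism from the shell: anything written to `/dev/kmsg` shows up in `dmesg` with a timestamp:

....
echo hello > /dev/kmsg
dmesg | tail -n 1
....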

Bibliography: https://stackoverflow.com/questions/12683169/measure-time-taken-for-linux-kernel-from-bootup-to-userpace/46517014#46517014

[[init-busybox]]
=== Run command at the end of BusyBox init

Use the `--eval-after` option if you rely on something that BusyBox' init sets up for you, like `/etc/fstab`:

....
./run --eval-after 'echo asdf;ls /proc;ls /sys;echo qwer'
....

After the commands run, you are left on an interactive shell.

The above command is basically equivalent to:

....
./run --kernel-cli-after-dash 'lkmc_eval="insmod hello.ko;./linux/poweroff.out;"'
....

where the `lkmc_eval` option gets evaled by our default link:rootfs_overlay/etc/init.d/S98[] startup script.

Except that `--eval-after` is smarter and uses `base64` encoding.

Alternatively, you can also add the commands to run to a new `init.d` entry to run at the end of the BusyBox init:

....
cp rootfs_overlay/etc/init.d/S98 rootfs_overlay/etc/init.d/S99.gitignore
vim rootfs_overlay/etc/init.d/S99.gitignore
./build-buildroot
./run
....

and they will be run automatically before the login prompt.

Scripts under `/etc/init.d` are run by `/etc/init.d/rcS`, which gets called by the line `::sysinit:/etc/init.d/rcS` in link:rootfs_overlay/etc/inittab[`/etc/inittab`].

=== Path to init

The init is selected at:

* initrd or initramfs system: `/init`, a custom one can be set with the `rdinit=` <>
* otherwise: default is `/sbin/init`, followed by some other paths, a custom one can be set with `init=`

More details: https://unix.stackexchange.com/questions/30414/what-can-make-passing-init-path-to-program-to-the-kernel-not-start-program-as-i/430614#430614

The init that finally got selected is shown on Linux v5.9.2 by a line of type:

....
<6>[ 0.309984] Run /sbin/init as init process
....

at the very end of the boot logs.

=== Init environment

Documented at https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html[]:

____
The kernel parses parameters from the kernel command line up to "-"; if it doesn't recognize a parameter and it doesn't contain a '.', the parameter gets passed to init: parameters with '=' go into init's environment, others are passed as command line arguments to init. Everything after "-" is passed as an argument to init.
____

And you can try it out with:

....
./run --kernel-cli 'init=/lkmc/linux/init_env_poweroff.out' --kernel-cli-after-dash 'asdf=qwer zxcv'
....

From the <>, we see that the kernel CLI at LKMC 69f5745d3df11d5c741551009df86ea6c61a09cf now contains:

....
init=/lkmc/linux/init_env_poweroff.out console=ttyS0 - lkmc_home=/lkmc asdf=qwer zxcv
....

and the init program outputs:

....
args:
/lkmc/linux/init_env_poweroff.out
-
zxcv

env:
HOME=/
TERM=linux
lkmc_home=/lkmc
asdf=qwer
....

Source: link:userland/linux/init_env_poweroff.c[].

As of the Linux kernel v5.7 (possibly earlier, I've skipped a few releases), boot also shows the init arguments and environment very clearly, which is a great addition:

....
<6>[ 0.309984] Run /sbin/init as init process
<7>[ 0.309991] with arguments:
<7>[ 0.309997] /sbin/init
<7>[ 0.310004] nokaslr
<7>[ 0.310010] -
<7>[ 0.310016] with environment:
<7>[ 0.310022] HOME=/
<7>[ 0.310028] TERM=linux
<7>[ 0.310035] earlyprintk=pl011,0x1c090000
<7>[ 0.310041] lkmc_home=/lkmc
....

==== init arguments

The annoying dash `-` gets passed as a parameter to `init`, which makes it impossible to use this method for most non-custom executables.

Arguments with dots that come after `-` are still treated specially (of the form `subsystem.somevalue`) and disappear from args, e.g.:

....
./run --kernel-cli 'init=/lkmc/linux/init_env_poweroff.out' --kernel-cli-after-dash 'a.b ab'
....

outputs:

....
args
/lkmc/linux/init_env_poweroff.out
-
ab
....

so see how `a.b` is gone.

The simple workaround is to just create a shell script that does it, e.g. as we've done at: link:rootfs_overlay/lkmc/gem5_exit.sh[].
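
For example, a wrapper along these lines keeps dotted arguments away from the kernel's special parsing, since they are hardcoded in the script instead of on the kernel command line (a sketch; compare the real link:rootfs_overlay/lkmc/gem5_exit.sh[]):

....
#!/bin/sh
# The dotted argument survives because the kernel never sees it.
exec /lkmc/linux/init_env_poweroff.out a.b ab
....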

==== init environment env

Wait, where do `HOME` and `TERM` come from? (greps the kernel). Ah, OK, the kernel sets those by default: https://github.com/torvalds/linux/blob/94710cac0ef4ee177a63b5227664b38c95bbf703/init/main.c#L173

....
const char *envp_init[MAX_INIT_ENVS+2] = { "HOME=/", "TERM=linux", NULL, };
....

==== BusyBox shell init environment

On top of the Linux kernel, the BusyBox `/bin/sh` shell will also define other variables.

We can explore the shenanigans that the shell adds on top of the Linux kernel with:

....
./run --kernel-cli 'init=/bin/sh'
....

From there we observe that:

....
env
....

gives:

....
SHLVL=1
HOME=/
TERM=linux
PWD=/
....

therefore adding `SHLVL` and `PWD` to the default kernel exported variables.

Furthermore, to increase confusion, if you list all non-exported shell variables https://askubuntu.com/questions/275965/how-to-list-all-variables-names-and-their-current-values with:

....
set
....

then it shows more variables, notably:

....
PATH='/sbin:/usr/sbin:/bin:/usr/bin'
....

===== BusyBox shell initrc files

Login shells source some default files, notably:

....
/etc/profile
$HOME/.profile
....

In our case, `HOME` is set to `/` presumably by `init` at: https://git.busybox.net/busybox/tree/init/init.c?id=5059653882dbd86e3bbf48389f9f81b0fac8cd0a#n1114

We provide `/.profile` from link:rootfs_overlay/.profile[], and use the default BusyBox `/etc/profile`.

The shell knows that it is a login shell if the first character of `argv[0]` is `-`, see also: https://stackoverflow.com/questions/2050961/is-argv0-name-of-executable-an-accepted-standard-or-just-a-common-conventi/42291142#42291142

When we use just `init=/bin/sh`, the Linux kernel sets `argv[0]` to `/bin/sh`, which does not start with `-`.

However, if you use `::respawn:-/bin/sh` on inittab described at <>, BusyBox's init sets `argv[0][0]` to `-`, and so does `getty`. This can be observed with:

....
cat /proc/$$/cmdline
....

where `$$` is the PID of the shell itself: https://stackoverflow.com/questions/21063765/get-pid-in-shell-bash
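
The convention is easy to replicate; here is a hedged C sketch of the check a shell does (illustrative, not BusyBox's actual code):

....
#include <stdio.h>

int main(int argc, char **argv) {
    (void)argc;
    /* Login processes conventionally set argv[0][0] to '-',
     * e.g. "-sh" or "-/bin/sh". */
    if (argv[0] && argv[0][0] == '-')
        puts("login shell: source /etc/profile and $HOME/.profile");
    else
        puts("not a login shell");
    return 0;
}
....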

Bibliography: https://unix.stackexchange.com/questions/176027/ash-profile-configuration-file

== initrd

The kernel can boot from a CPIO file, which is a directory serialization format much like tar: https://superuser.com/questions/343915/tar-vs-cpio-what-is-the-difference

The bootloader, which for us is provided by QEMU itself, is then configured to put that CPIO into memory, and tell the kernel that it is there.

This is very similar to the kernel image itself, which already gets put into memory by the QEMU `-kernel` option.

With this setup, you don't even need to give a root filesystem to the kernel: it just does everything in memory in a ramfs.

To enable initrd instead of the default ext2 disk image, do:

....
./build-buildroot --initrd
./run --initrd
....

By looking at the QEMU run command generated, you can see that we didn't give the `-drive` option at all:

....
cat "$(./getvar run_dir)/run.sh"
....

Instead, we used the QEMU `-initrd` option to point to the `.cpio` filesystem that Buildroot generated for us.

Try removing that `-initrd` option to watch the kernel panic without rootfs at the end of boot.

When using `.cpio`, there can be no <> across boots, since all file operations happen in memory in a tmpfs:

....
date >f
poweroff
cat f
# can't open 'f': No such file or directory
....

which can be good for automated tests, as it ensures that you are using a pristine unmodified system image every time.

Note however that we already disable disk persistency by default on ext2 filesystems even without `--initrd`: xref:disk-persistency[xrefstyle=full].

One downside of this method is that it has to put the entire filesystem into memory, and could lead to a panic:

....
end Kernel panic - not syncing: Out of memory and no killable processes...
....

This can be solved by increasing the memory as explained at <>:

....
./run --initrd --memory 256M
....

The main ingredients to get initrd working are:

* `BR2_TARGET_ROOTFS_CPIO=y`: make Buildroot generate `images/rootfs.cpio` in addition to the other images.
+
It is also possible to compress that image with other options.
* `qemu -initrd`: make QEMU put the image into memory and tell the kernel about it.
* `CONFIG_BLK_DEV_INITRD=y`: Compile the kernel with initrd support, see also: https://unix.stackexchange.com/questions/67462/linux-kernel-is-not-finding-the-initrd-correctly/424496#424496
+
Buildroot forces that option when `BR2_TARGET_ROOTFS_CPIO=y` is given

TODO: how does the bootloader inform the kernel where to find initrd? https://unix.stackexchange.com/questions/89923/how-does-linux-load-the-initrd-image

=== initrd in desktop distros

Most modern desktop distributions have an initrd in their root disk to do early setup.

The rationale for this is described at: https://en.wikipedia.org/wiki/Initial_ramdisk

One obvious use case is having an encrypted root filesystem: you keep the initrd in an unencrypted partition, and then setup decryption from there.

I think GRUB then knows how to read common disk formats, and loads that initrd into memory with a `/boot/grub/grub.cfg` directive of type:

....
initrd /initrd.img-4.4.0-108-generic
....

Related: https://stackoverflow.com/questions/6405083/initrd-and-booting-the-linux-kernel

=== initramfs

initramfs is just like <>, but you also glue the image directly to the kernel image itself using the kernel's build system.

Try it out with:

....
./build-buildroot --initramfs
./build-linux --initramfs
./run --initramfs
....

Notice how we had to rebuild the Linux kernel this time around as well after Buildroot, since in that build we will be gluing the CPIO to the kernel image.

Now, once again, if we look at the QEMU run command generated, we see all that QEMU needs is the `-kernel` option, no `-drive`, not even `-initrd`! Pretty cool:

....
cat "$(./getvar run_dir)/run.sh"
....

It is also interesting to observe how this increases the size of the kernel image if you do a:

....
ls -lh "$(./getvar linux_image)"
....

before and after using initramfs, since the `.cpio` is now glued to the kernel image.

Don't forget that to stop using initramfs, you must rebuild the kernel without `--initramfs` to get rid of the attached CPIO image:

....
./build-linux
./run
....

Alternatively, consider using <> if you need to switch between initramfs and non-initramfs often:

....
./build-buildroot --initramfs
./build-linux --initramfs --linux-build-id initramfs
./run --initramfs --linux-build-id initramfs
....

Setting up initramfs is very easy: our scripts just set `CONFIG_INITRAMFS_SOURCE` to point to the CPIO path.

http://nairobi-embedded.org/initramfs_tutorial.html shows a full manual setup.

=== rootfs

This is how `/proc/mounts` shows the root filesystem:

* hard disk: `/dev/root on / type ext2 (rw,relatime,block_validity,barrier,user_xattr)`. That file does not exist however.
* initrd: `rootfs on / type rootfs (rw)`
* initramfs: `rootfs on / type rootfs (rw)`

TODO: understand `/dev/root` better:

* https://unix.stackexchange.com/questions/295060/why-on-some-linux-systems-does-the-root-filesystem-appear-as-dev-root-instead
* https://superuser.com/questions/1213770/how-do-you-determine-the-root-device-if-dev-root-is-missing

==== /dev/root

See: xref:rootfs[xrefstyle=full]

=== gem5 initrd

TODO we were not able to get it working yet: https://stackoverflow.com/questions/49261801/how-to-boot-the-linux-kernel-with-initrd-or-initramfs-with-gem5

This would require gem5 to load the CPIO into memory, just like QEMU. Grepping `initrd` shows some ARM hits under:

....
src/arch/arm/linux/atag.hh
....

but they are commented out.

=== gem5 initramfs

This could in theory be easier to make work than initrd since the emulator does not have to do anything special.

However, it didn't: boot fails at the end because the kernel does not see the initramfs, but rather tries to open our dummy root filesystem, which unsurprisingly is not formatted in a way the kernel understands:

....
VFS: Cannot open root device "sda" or unknown-block(8,0): error -5
....

We think that this might be because gem5 boots `vmlinux` directly, and not from the final compressed images that contain the attached rootfs such as `bzImage`, which is what QEMU does, see also: xref:vmlinux-vs-bzimage-vs-zimage-vs-image[xrefstyle=full].

To do this failed test, we automatically pass a dummy disk image as of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91 since the scripts don't handle a missing `--disk-image` well, much like is currently done for <>.

Interestingly, using initramfs significantly slows down the gem5 boot, even though it did not work. For example, we've observed a 4x slowdown as of 17062a2e8b6e7888a14c3506e9415989362c58bf for aarch64. This is likely because expanding the large attached CPIO is expensive. We can clearly see from the kernel logs that the kernel just hangs at a point after the message `PCI: CLS 0 bytes, default 64` for a long time before proceeding further.

== Device tree

The device tree is a Linux kernel defined data structure that serves to inform the kernel how the hardware is set up.

Device trees serve to reduce the need for hardware vendors to patch the kernel: they just provide a device tree file instead, which is much simpler.

x86 does not use device trees, but many other archs do, notably ARM.

This is notably because ARM boards:

* typically don't have discoverable hardware extensions like PCI, but rather just put everything on an SoC with magic register addresses
* are made by a wide variety of vendors due to ARM's licensing business model, which increases variability

The Linux kernel itself has several device trees under `./arch//boot/dts`, see also: https://stackoverflow.com/questions/21670967/how-to-compile-dts-linux-device-tree-source-files-to-dtb/42839737#42839737

=== DTB files

Files that contain device trees have the `.dtb` extension when compiled, and `.dts` when in text form.

You can convert between those formats with:

....
"$(./getvar buildroot_host_dir)"/bin/dtc -I dtb -O dts -o a.dts a.dtb
"$(./getvar buildroot_host_dir)"/bin/dtc -I dts -O dtb -o a.dtb a.dts
....

Buildroot builds the tool due to `BR2_PACKAGE_HOST_DTC=y`.

On Ubuntu 18.04, the package can be installed with:

....
sudo apt-get install device-tree-compiler
....

See also: https://stackoverflow.com/questions/14000736/tool-to-visualize-the-device-tree-file-dtb-used-by-the-linux-kernel/39931834#39931834

Device tree files are provided to the emulator just like the root filesystem and the Linux kernel image.

In real hardware, those components are also often provided separately. For example, on the Raspberry Pi 2, the SD card must contain two partitions:

* the first contains all magic files, including the Linux kernel and the device tree
* the second contains the root filesystem

See also: https://stackoverflow.com/questions/29837892/how-to-run-a-c-program-with-no-os-on-the-raspberry-pi/40063032#40063032

=== Device tree syntax

Good format descriptions:

* https://www.raspberrypi.org/documentation/configuration/device-tree.md

Minimal example:

....
/dts-v1/;

/ {
a;
};
....

Check correctness with:

....
dtc a.dts
....

Separate nodes are simply merged by node path, e.g.:

....
/dts-v1/;

/ {
a;
};

/ {
b;
};
....

then `dtc a.dts` gives:

....
/dts-v1/;

/ {
a;
b;
};
....

=== Get device tree from a running kernel

https://unix.stackexchange.com/questions/265890/is-it-possible-to-get-the-information-for-a-device-tree-using-sys-of-a-running/330926#330926

This is especially interesting because QEMU and gem5 are capable of generating DTBs that match the selected machine depending on dynamic command line parameters for some types of machines.

So observing the device tree from the guest allows us to easily see what the emulator has generated.

Compile the `dtc` tool into the root filesystem:

....
./build-buildroot \
--arch aarch64 \
--config 'BR2_PACKAGE_DTC=y' \
--config 'BR2_PACKAGE_DTC_PROGRAMS=y' \
;
....

`-M virt` for example, which we use by default for `aarch64`, boots just fine without the `-dtb` option:

....
./run --arch aarch64
....

Then, from inside the guest:

....
dtc -I fs -O dts /sys/firmware/devicetree/base
....

contains:

....
cpus {
#address-cells = <0x1>;
#size-cells = <0x0>;

cpu@0 {
compatible = "arm,cortex-a57";
device_type = "cpu";
reg = <0x0>;
};
};
....
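
If `dtc` is not available on the guest, individual properties can also be read with plain file I/O, since the kernel exposes the tree as a filesystem. A hedged sketch that prints the NUL-separated string list of the root `compatible` property, assuming the conventional `/proc/device-tree` symlink to `/sys/firmware/devicetree/base` exists:

....
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("/proc/device-tree/compatible", "rb");
    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    int c;
    while ((c = fgetc(f)) != EOF)
        /* String list properties separate entries with NUL bytes. */
        putchar(c == '\0' ? '\n' : c);
    fclose(f);
    return EXIT_SUCCESS;
}
....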

=== Device tree emulator generation

Since emulators know everything about the hardware, they can automatically generate device trees for us, which is very convenient.

This is the case for both QEMU and gem5.

For example, if we increase the <> to 2:

....
./run --arch aarch64 --cpus 2
....

QEMU automatically adds a second CPU to the DTB!

....
cpu@0 {
cpu@1 {
....

The action seems to be happening at: `hw/arm/virt.c`.

You can dump the DTB QEMU generated with:

....
./run --arch aarch64 -- -machine dumpdtb=dtb.dtb
....

as mentioned at: https://lists.gnu.org/archive/html/qemu-discuss/2017-02/msg00051.html

<> 2a9573f5942b5416fb0570cf5cb6cdecba733392 can also generate its own DTB.

gem5 can generate DTBs on ARM with `--generate-dtb`. The generated DTB is placed in the <> named as `system.dtb`.

== KVM

https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine[KVM] is a Linux kernel interface that <> execution of virtual machines.

You can make QEMU or <> use KVM by enabling it with:

....
./run --kvm
....

KVM works by running userland instructions natively directly on the real hardware instead of running a software simulation of those instructions.

Therefore, KVM only works if the host architecture is the same as the guest architecture. This means that this will likely only work for x86 guests since almost all development machines are x86 nowadays. Unless you are https://www.youtube.com/watch?v=8ItXpmLsINs[running an ARM desktop for some weird reason] :-)
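
The kernel exposes KVM to emulators through the `/dev/kvm` device and `ioctl` calls on it. A hedged sketch to check from userland whether KVM is present and usable on the host:

....
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }
    /* The stable KVM API version has been 12 for a long time. */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);
    close(kvm);
    return version == 12 ? 0 : 1;
}
....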

We don't enable KVM by default because:

* it limits visibility, since more things are running natively:
** can't use <>
** can't do <>
** on gem5, you lose <> and therefore any notion of performance
* QEMU kernel boots are already <> for most purposes without it

One important use case for KVM is to fast-forward gem5 execution, often to skip boot, take a <>, and then move on to a more detailed and slow simulation.

=== KVM arm

TODO: we haven't gotten it to work yet, but it should be doable, and this is an outline of how to do it. Just don't expect this to be tested very often for now.

We can test KVM on arm by running this repository inside an Ubuntu arm QEMU VM.

This produces no speedup of course, because the outer VM is already slow, as it cannot use KVM on the x86 host.

First, obtain an Ubuntu arm64 virtual machine as explained at: https://askubuntu.com/questions/281763/is-there-any-prebuilt-qemu-ubuntu-image32bit-online/1081171#1081171

Then, from inside that image:

....
sudo apt-get install git
git clone https://github.com/cirosantilli/linux-kernel-module-cheat
cd linux-kernel-module-cheat
python3 -m venv .venv
. .venv/bin/activate
./setup -y
....

and then proceed exactly as in <>.

We don't want to build the full Buildroot image inside the VM as that would be way too slow, thus the recommendation for the prebuilt setup.

TODO: do the right thing and cross compile QEMU and gem5. gem5's Python parts might be a pain. QEMU should be easy: https://stackoverflow.com/questions/26514252/cross-compile-qemu-for-arm

=== gem5 KVM

While gem5 does have KVM, as of 2019 its support has not been very good, because debugging it is harder and people haven't focused intensively on it.

X86 was broken with pending patches: https://www.mail-archive.com/[email protected]/msg15046.html It failed immediately on:

....
panic: KVM: Failed to enter virtualized mode (hw reason: 0x80000021)
....

also mentioned at:

* https://stackoverflow.com/questions/62687463/gem5-kvm-doesnt-work-with-error-0x80000021
* https://gem5-users.gem5.narkive.com/8DBihuUx/running-fs-py-with-x86kvmcpu-failed

Bibliography:

* ARM thread: https://stackoverflow.com/questions/53523087/how-to-run-gem5-on-kvm-on-arm-with-multiple-cores

== User mode simulation

Both QEMU and gem5 have a user mode simulation mode in addition to the full system simulation that we consider elsewhere in this project.

In QEMU, it is called just <>, and in gem5 it is called <>.

In both, the basic idea is the same.

User mode simulation takes regular userland executables of any arch as input and executes them directly, without booting a kernel.

Instead of simulating the full system, it translates normal instructions like in full system mode, but magically forwards system calls to the host OS.
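
For example, when the guest program below reaches the `write` system call, the emulator decodes the guest registers, performs an equivalent `write` on the host, and writes the host return value back into the guest's registers. A hedged illustrative guest program:

....
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* Instructions are translated as usual, but this syscall is
     * intercepted by the emulator and forwarded to the host kernel. */
    char msg[] = "hello from the guest\n";
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    return 0;
}
....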

Advantages over full system simulation:

* the simulation may <> since you don't have to simulate the Linux kernel and several device models
* you don't need to build your own kernel or root filesystem, which saves time. You still need a toolchain however, but the pre-packaged ones may work fine.

Disadvantages:

* lower guest to host portability:
** TODO confirm: host OS == guest OS?
** TODO confirm: the host Linux kernel should be newer than the kernel the executable was built for.
+
It may still work even if that is not the case, but it could fail if a missing system call is reached.
+
The target Linux kernel of the executable is a GCC toolchain build-time configuration.
** emulator implementers have to keep up with libc changes, some of which break even a C hello world due to setup code executed before `main`.
+
See also: xref:user-mode-simulation-with-glibc[xrefstyle=full]
* cannot be used to test the Linux kernel or any devices, and results are less representative of a real system since we are faking more

=== QEMU user mode getting started

Let's run link:userland/c/command_line_arguments.c[] built with the Buildroot toolchain on QEMU user mode:

....
./build user-mode-qemu
./run \
--userland userland/c/command_line_arguments.c \
--cli-args='asdf "qw er"' \
;
....

Output:

....
/path/to/linux-kernel-module-cheat/out/userland/default/x86_64/c/command_line_arguments.out
asdf
qw er
....

`./run --userland` path resolution is analogous to <>.

`./build user-mode-qemu` first builds Buildroot, and then runs `./build-userland`, which is further documented at: xref:userland-setup[xrefstyle=full]. It also builds QEMU. If you have already done a <> previously, this will be very fast.

If you modify the userland programs, rebuild simply with:

....
./build-userland
....

To rebuild just QEMU userland if you hack it, use:

....
./build-qemu --mode userland
....

The:

....
--mode userland
....

is needed because QEMU has two separate executables:

* `qemu-x86_64` for userland
* `qemu-system-x86_64` for full system

==== User mode GDB

It's nice when <> just works, right?

....
./run \
--arch aarch64 \
--gdb-wait \
--userland userland/c/command_line_arguments.c \
--cli-args 'asdf "qw er"' \
;
....

and on another shell:

....
./run-gdb \
--arch aarch64 \
--userland userland/c/command_line_arguments.c \
main \
;
....

Or alternatively, if you are using <>, do everything in one go with:

....
./run \
--arch aarch64 \
--gdb \
--userland userland/c/command_line_arguments.c \
--cli-args 'asdf "qw er"' \
;
....

To stop at the very first instruction of a freestanding program, just use `--no-continue`. A good example of this is shown at: xref:freestanding-programs[xrefstyle=full].

=== User mode tests

Automatically run all userland tests that can be run in user mode simulation, and check that they exit with status 0:

....
./build --all-archs test-executables-userland
./test-executables --all-archs --all-emulators
....

Or just for QEMU:

....
./build --all-archs test-executables-userland-qemu
./test-executables --all-archs --emulator qemu
....

Source: link:test-executables[]

This script skips a manually configured list of tests, notably:

* tests that depend on a full running kernel and cannot be run in user mode simulation, e.g. those that rely on kernel modules
* tests that require user interaction
* tests that take perceptible amounts of time
* known bugs we didn't have time to fix ;-)

Tests under link:userland/libs/[] are only run if `--package` or `--package-all` are given as described at <>.

The gem5 tests require building statically with build id `static`, see also: xref:gem5-syscall-emulation-mode[xrefstyle=full]. TODO automate this better.

See: xref:test-this-repo[xrefstyle=full] for more useful testing tips.

=== User mode Buildroot executables

If you followed <>, you can now run the executables created by Buildroot directly as:

....
./run \
--userland "$(./getvar buildroot_target_dir)/bin/echo" \
--cli-args='asdf' \
;
....

To easily explore the userland executable environment interactively, you can do:

....
./run \
--arch aarch64 \
--userland "$(./getvar --arch aarch64 buildroot_target_dir)/bin/sh" \
--terminal \
;
....

or:

....
./run \
--arch aarch64 \
--userland "$(./getvar --arch aarch64 buildroot_target_dir)/bin/sh" \
--cli-args='-c "uname -a && pwd"' \
;
....

Here is an interesting examples of this: xref:linux-test-project[xrefstyle=full]

=== User mode simulation with glibc

At 125d14805f769104f93c510bedaa685a52ec025d we <>, and caused some user mode pain, which we document here.

==== FATAL: kernel too old failure in userland simulation

glibc has a check for kernel version, likely obtained from the `uname` syscall, and if the kernel is not new enough, it quits.

Both gem5 and QEMU however allow setting the reported `uname` version from the command line for <>, which we do to always match our toolchain.

QEMU by default copies the host `uname` value, but we always override it in our scripts.

Determining the right number to use for the kernel version is of course highly non-trivial and would require an extensive userland test suite, which most emulators don't have.

For example, we can set the reported version explicitly with:

....
./run --arch aarch64 --kernel-version 4.18 --userland userland/posix/uname.c
....

Source: link:userland/posix/uname.c[].

The QEMU source that does this is at: https://github.com/qemu/qemu/blob/v3.1.0/linux-user/syscall.c#L8931 The default ID is just hardcoded on the source.
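
To get a feel for what such a check looks like, here is a hedged sketch of a glibc-style version test (illustrative only; the real glibc logic lives in its startup code, and the 4.18 minimum here is a hypothetical toolchain-configured value):

....
#include <stdio.h>
#include <stdlib.h>
#include <sys/utsname.h>

int main(void) {
    struct utsname u;
    if (uname(&u))
        return EXIT_FAILURE;
    int major = 0, minor = 0;
    sscanf(u.release, "%d.%d", &major, &minor);
    if (major < 4 || (major == 4 && minor < 18)) {
        fputs("FATAL: kernel too old\n", stderr);
        return EXIT_FAILURE;
    }
    printf("kernel %s is new enough\n", u.release);
    return EXIT_SUCCESS;
}
....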

Bibliography:

* https://stackoverflow.com/questions/48959349/how-to-solve-fatal-kernel-too-old-when-running-gem5-in-syscall-emulation-se-m
* https://stackoverflow.com/questions/53085048/how-to-compile-and-run-an-executable-in-gem5-syscall-emulation-mode-with-se-py/53085049#53085049
* https://gem5-review.googlesource.com/c/public/gem5/+/15855

==== stack smashing detected when using glibc

For some reason QEMU / glibc x86_64 picks up the host libc, which breaks things.

Other archs work, since for them the different host libc is skipped. <> also work.

We have worked around this with https://bugs.launchpad.net/qemu/+bug/1701798/comments/12 from the thread: https://bugs.launchpad.net/qemu/+bug/1701798 by creating the file: link:rootfs_overlay/etc/ld.so.cache[], which is a symlink to a file that cannot exist: `/dev/null/nonexistent`.

Reproduction:

....
rm -f "$(./getvar buildroot_target_dir)/etc/ld.so.cache"
./run --userland userland/c/hello.c
./run --userland userland/c/hello.c --qemu-which host
....

Outcome:

....
*** stack smashing detected ***: terminated
qemu: uncaught target signal 6 (Aborted) - core dumped
....

To get things working again, restore `ld.so.cache` with:

....
./build-buildroot
....

I've also tested on an Ubuntu 16.04 guest and the failure is a different one:

....
qemu: uncaught target signal 4 (Illegal instruction) - core dumped
....

A non-QEMU-specific example of stack smashing is shown at: https://stackoverflow.com/questions/1345670/stack-smashing-detected/51897264#51897264

Tested at: 2e32389ebf1bedd89c682aa7b8fe42c3c0cf96e5 + 1.

=== User mode static executables

Example:

....
./build-userland \
--arch aarch64 \
--static \
;
./run \
--arch aarch64 \
--static \
--userland userland/c/command_line_arguments.c \
--cli-args 'asdf "qw er"' \
;
....

Running dynamically linked executables in QEMU requires pointing it to the root filesystem with the `-L` option so that it can find the dynamic linker and shared libraries, see also:

* https://stackoverflow.com/questions/54802670/using-dynamic-linker-with-qemu-arm/64551293#64551293
* https://stackoverflow.com/questions/khow-to-gdb-step-debug-a-dynamically-linked-executable-in-qemu-user-mode

We pass `-L` by default, so everything just works.

However, in case something goes wrong, you can also try statically linked executables, since this mechanism tends to be a bit more stable, for example:

* QEMU x86_64 guest on x86_64 host was failing with <>, but we found a workaround
* gem5 user only supported static executables in the past, as mentioned at: xref:gem5-syscall-emulation-mode[xrefstyle=full]

Running statically linked executables sometimes makes things break:

* <>
* TODO understand why:
+
....
./run --static --userland userland/c/file_write_read.c
....
+
fails our assertion that the data was read back correctly:
+
....
Assertion `strcmp(data, output) == 0' failed
....

==== User mode static executables with dynamic libraries

One limitation of static executables is that Buildroot mostly only builds dynamic versions of libraries (the libc is an exception).

So programs that rely on those libraries might not compile as GCC can't find the `.a` version of the library.

For example, if we try to build <> statically:

....
./build-userland --package openblas --static -- userland/libs/openblas/hello.c
....

it fails with:

....
ld: cannot find -lopenblas
....

[[cpp-static-and-pthreads]]
===== C++ static and pthreads

`g++` and pthreads also cause issues:

* https://stackoverflow.com/questions/35116327/when-g-static-link-pthread-cause-segmentation-fault-why
* https://stackoverflow.com/questions/58848694/gcc-whole-archive-recipe-for-static-linking-to-pthread-stopped-working-in-rec

As a consequence, the example link:userland/cpp/atomic/std_atomic.cpp[] just hangs as of LKMC ca0403849e03844a328029d70c08556155dc1cd0 + 1:

....
./run --userland userland/cpp/atomic/std_atomic.cpp --static
....

And before that, it used to fail with other randomly different errors, e.g.:

....
qemu-x86_64: /path/to/linux-kernel-module-cheat/submodules/qemu/accel/tcg/cpu-exec.c:700: cpu_exec: Assertion `!have_mmap_lock()' failed.
qemu-x86_64: /path/to/linux-kernel-module-cheat/submodules/qemu/accel/tcg/cpu-exec.c:700: cpu_exec: Assertion `!have_mmap_lock()' failed.
....

And a native Ubuntu 18.04 AMD64 run with static compilation segfaults.

As of LKMC f5d4998ff51a548ed3f5153aacb0411d22022058 the aarch64 error:

....
./run --arch aarch64 --userland userland/cpp/atomic/fail.cpp --static
....

is:

....
terminate called after throwing an instance of 'std::system_error'
what(): Unknown error 16781344
qemu: uncaught target signal 6 (Aborted) - core dumped
....

The workaround:

....
-pthread -Wl,--whole-archive -lpthread -Wl,--no-whole-archive
....

fixes some of the problems, but not all (TODO: which ones were missing?), so we are just skipping those tests for now.

=== syscall emulation mode program stdin

The following work on both QEMU and gem5 as of LKMC 99d6bc6bc19d4c7f62b172643be95d9c43c26145 + 1. Interactive input:

....
./run --userland userland/c/getchar.c
....

Source: link:userland/c/getchar.c[]

A line of the following type should show:

....
enter a character:
....

and after pressing say `a` and Enter, we get:

....
you entered: a
....

Note however that due to <> we don't really see the initial `enter a character` line.

Non-interactive input from a file, by forwarding the emulator's stdin implicitly through our Python scripts:

....
printf a > f.tmp
./run --userland userland/c/getchar.c < f.tmp
....

Input from a file by explicitly requesting our scripts to use it via the Python API:

....
printf a > f.tmp
./run --emulator gem5 --userland userland/c/getchar.c --stdin-file f.tmp
....

This is especially useful when running tests that require stdin input.

=== gem5 syscall emulation mode

Less robust than QEMU's, but still usable:

* https://stackoverflow.com/questions/48986597/when-should-you-use-full-system-fs-vs-syscall-emulation-se-with-userland-program

There are many more unimplemented syscalls in gem5 than in QEMU. Many of those are trivial to implement however.

So let's just play with some static ones:

....
./build-userland --arch aarch64
./run \
--arch aarch64 \
--emulator gem5 \
--userland userland/c/command_line_arguments.c \
--cli-args 'asdf "qw er"' \
;
....

TODO: how to escape spaces on the command line arguments?

<> also works normally on gem5:

....
./run \
--arch aarch64 \
--emulator gem5 \
--gdb-wait \
--userland userland/c/command_line_arguments.c \
--cli-args 'asdf "qw er"' \
;
./run-gdb \
--arch aarch64 \
--emulator gem5 \
--userland userland/c/command_line_arguments.c \
main \
;
....

==== gem5 dynamic linked executables in syscall emulation

Support for dynamic linking was added in November 2019:

* https://stackoverflow.com/questions/50542222/how-to-run-a-dynamically-linked-executable-syscall-emulation-mode-se-py-in-gem5/50696098#50696098
* https://stackoverflow.com/questions/64547306/cannot-open-lib-ld-linux-aarch64-so-1-in-qemu-or-gem5/64551313#64551313

Note that as shown at xref:benchmark-emulators-on-userland-executables[xrefstyle=full], the dynamic version runs 200x more instructions, which might have an impact on smaller simulations in detailed CPUs.

==== gem5 syscall emulation exit status

As of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91, the crappy `se.py` script does not forward the exit status of syscall emulation mode, you can test it with:

....
./run --dry-run --emulator gem5 --userland userland/c/false.c
....

Source: link:userland/c/false.c[].

Then manually run the generated gem5 CLI, and do:

....
echo $?
....

and the output is always `0`.

Instead, it just outputs a message to stdout just like for <>:

....
Simulated exit code not 0! Exit code is 1
....

which we parse in link:run[] and then exit with the correct result ourselves...

Related thread: https://stackoverflow.com/questions/56032347/is-there-a-way-to-identify-if-gem5-run-got-over-successfully

==== gem5 syscall emulation mode syscall tracing

Since gem5 has to implement syscalls itself in syscall emulation mode, it can of course clearly see which syscalls are being made, and we can log them for debug purposes with <>, e.g.:

....
./run \
--emulator gem5 \
--userland userland/arch/x86_64/freestanding/linux/hello.S \
--trace-stdout \
--trace ExecAll,SyscallBase,SyscallVerbose \
;
....

the trace as of f2eeceb1cde13a5ff740727526bf916b356cee38 + 1 contains:

....
0: system.cpu A0 T0 : @asm_main_after_prologue : mov rdi, 0x1
0: system.cpu A0 T0 : @asm_main_after_prologue.0 : MOV_R_I : limm rax, 0x1 : IntAlu : D=0x0000000000000001 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
1000: system.cpu A0 T0 : @asm_main_after_prologue+7 : mov rdi, 0x1
1000: system.cpu A0 T0 : @asm_main_after_prologue+7.0 : MOV_R_I : limm rdi, 0x1 : IntAlu : D=0x0000000000000001 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
2000: system.cpu A0 T0 : @asm_main_after_prologue+14 : lea rsi, DS:[rip + 0x19]
2000: system.cpu A0 T0 : @asm_main_after_prologue+14.0 : LEA_R_P : rdip t7, %ctrl153, : IntAlu : D=0x000000000040008d flags=(IsInteger|IsMicroop|IsDelayedCommit|IsFirstMicroop)
2500: system.cpu A0 T0 : @asm_main_after_prologue+14.1 : LEA_R_P : lea rsi, DS:[t7 + 0x19] : IntAlu : D=0x00000000004000a6 flags=(IsInteger|IsMicroop|IsLastMicroop)
3500: system.cpu A0 T0 : @asm_main_after_prologue+21 : mov rdi, 0x6
3500: system.cpu A0 T0 : @asm_main_after_prologue+21.0 : MOV_R_I : limm rdx, 0x6 : IntAlu : D=0x0000000000000006 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
4000: system.cpu: T0 : syscall write called w/arguments 1, 4194470, 6, 0, 0, 0
hello
4000: system.cpu: T0 : syscall write returns 6
4000: system.cpu A0 T0 : @asm_main_after_prologue+28 : syscall eax : IntAlu : flags=(IsInteger|IsSerializeAfter|IsNonSpeculative|IsSyscall)
5000: system.cpu A0 T0 : @asm_main_after_prologue+30 : mov rdi, 0x3c
5000: system.cpu A0 T0 : @asm_main_after_prologue+30.0 : MOV_R_I : limm rax, 0x3c : IntAlu : D=0x000000000000003c flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
6000: system.cpu A0 T0 : @asm_main_after_prologue+37 : mov rdi, 0
6000: system.cpu A0 T0 : @asm_main_after_prologue+37.0 : MOV_R_I : limm rdi, 0 : IntAlu : D=0x0000000000000000 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
6500: system.cpu: T0 : syscall exit called w/arguments 0, 4194470, 6, 0, 0, 0
6500: system.cpu: T0 : syscall exit returns 0
6500: system.cpu A0 T0 : @asm_main_after_prologue+44 : syscall eax : IntAlu : flags=(IsInteger|IsSerializeAfter|IsNonSpeculative|IsSyscall)
....

so we see that two syscall lines were added for each syscall, showing the syscall inputs and exit status, just like a mini `strace`!

==== gem5 syscall emulation multithreading

gem5 user mode multithreading has been particularly flaky compared to <>, but work is being put into improving it.

In gem5 syscall simulation, the `fork` syscall checks if there is a free CPU, and if there is, the new thread runs on that CPU.

Otherwise, the `fork` call, and therefore higher level interfaces to `fork` such as `pthread_create`, fail and return a failure status in the guest.

For example, if we use just one CPU for link:userland/posix/pthread_self.c[] which spawns one thread besides `main`:

....
./run --cpus 1 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args 1
....

fails with this error message coming from the guest stderr:

....
pthread_create: Resource temporarily unavailable
....

It works however if we add one extra CPU:

....
./run --cpus 2 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args 1
....

Once threads exit, their CPU is freed and becomes available for new `fork` calls. For example, the following run spawns a thread, joins it, and then spawns again, so 2 CPUs are enough:

....
./run --cpus 2 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args '1 2'
....

because at each point in time, only up to two threads are running.
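
Because `pthread_create` can legitimately fail under gem5 syscall emulation, it is worth checking its return value explicitly, which is where the error message above comes from. A hedged sketch:

....
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *work(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    pthread_t t;
    int err = pthread_create(&t, NULL, work, NULL);
    if (err) {
        /* Under gem5 SE with no free CPU this is EAGAIN:
         * "Resource temporarily unavailable". */
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
        return 1;
    }
    pthread_join(t, NULL);
    return 0;
}
....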

gem5 syscall emulation does show the expected number of cores when queried, e.g.:

....
./run --cpus 1 --userland userland/cpp/thread_hardware_concurrency.cpp --emulator gem5
./run --cpus 2 --userland userland/cpp/thread_hardware_concurrency.cpp --emulator gem5
....

outputs `1` and `2` respectively.

This can also be seen clearly by running `sched_getcpu`:

....
./run \
--arch aarch64 \
--cli-args 4 \
--cpus 8 \
--emulator gem5 \
--userland userland/linux/sched_getcpu.c \
;
....

which necessarily produces an output containing the CPU numbers from 1 to 4 and no higher:

....
1
3
4
2
....

TODO why does the `2` come at the end here? Would be good to do a detailed assembly run analysis.

==== gem5 syscall emulation multiple executables

gem5 syscall emulation has the nice feature of allowing you to run multiple executables "at once".

Each executable starts running on the next free core much as if it had been forked right at the start of simulation: <>.

This can be useful to quickly create deterministic multi-CPU workloads.

`se.py --cmd` takes a semicolon separated list, which LKMC exposes by taking `--userland` multiple times, as in:

....
./run \
--arch aarch64 \
--cpus 2 \
--emulator gem5 \
--userland userland/posix/getpid.c \
--userland userland/posix/getpid.c \
;
....

We need at least one CPU per executable, just like when forking new processes.

The outcome of this is that we see two different `pid` messages printed to stdout:

....
pid=101
pid=100
....

since from <> we can see that se.py sets up one different PID per executable starting at 100:

....
workloads = options.cmd.split(';')
idx = 0
for wrkld in workloads:
process = Process(pid = 100 + idx)
....

We can also see that these processes are running concurrently with <> by hacking:

....
--debug-flags ExecAll \
--debug-file cout \
....

which starts with:

....
0: system.cpu1: A0 T0 : @__end__+274873647040 : add x0, sp, #0 : IntAlu : D=0x0000007ffffefde0 flags=(IsInteger)
0: system.cpu0: A0 T0 : @__end__+274873647040 : add x0, sp, #0 : IntAlu : D=0x0000007ffffefde0 flags=(IsInteger)
500: system.cpu0: A0 T0 : @__end__+274873647044 : bl <__end__+274873649648> : IntAlu : D=0x0000004000001008 flags=(IsInteger|IsControl|IsDirectControl|IsUncondControl|IsCall)
500: system.cpu1: A0 T0 : @__end__+274873647044 : bl <__end__+274873649648> : IntAlu : D=0x0000004000001008 flags=(IsInteger|IsControl|IsDirectControl|IsUncondControl|IsCall)
....

and therefore shows one instruction running on each CPU for each process at the same time.

===== gem5 syscall emulation --smt

gem5 b1623cb2087873f64197e503ab8894b5e4d4c7b4 syscall emulation has an `--smt` option presumably for <> but it has been neglected forever it seems: https://github.com/cirosantilli/linux-kernel-module-cheat/issues/104

If we start from the manually hacked working command from <> and try to add:

....
--cpu 1 --cpu-type DerivO3CPU --caches
....

We choose <> because of the se.py assert:

....
example/se.py:115: assert(options.cpu_type == "DerivO3CPU")
....

But then that fails with:

....
gem5.opt: /path/to/linux-kernel-module-cheat/out/gem5/master3/build/ARM/cpu/o3/cpu.cc:205: FullO3CPU::FullO3CPU(DerivO3CPUParams*) [with Impl = O3CPUImpl]: Assertion `params->numPhysVecPredRegs >= numThreads * TheISA::NumVecPredRegs' failed.
Program aborted at tick 0
....

=== QEMU user mode quirks

==== QEMU user mode does not show stdout immediately

At 8d8307ac0710164701f6e14c99a69ee172ccbb70 + 1, I noticed that if you run link:userland/posix/count.c[]:

....
./run --userland userland/posix/count.c --cli-args 3
....

it first waits for 3 seconds, then the program exits, and then it dumps all the stdout at once, instead of counting once every second as expected.

The same can be reproduced by copying the raw QEMU command and piping it through `tee`, so I don't think it is a bug in our setup:

....
/path/to/linux-kernel-module-cheat/out/qemu/default/x86_64-linux-user/qemu-x86_64 \
-L /path/to/linux-kernel-module-cheat/out/buildroot/build/default/x86_64/target \
/path/to/linux-kernel-module-cheat/out/userland/default/x86_64/posix/count.out \
3 \
| tee
....

TODO: investigate further and then possibly post on QEMU mailing list.
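
One plausible explanation, not yet confirmed for this case, is ordinary stdio buffering: glibc line buffers stdout when it is a terminal, but fully buffers it when it is a pipe, so everything shows up only at exit. If that is the cause, flushing explicitly in the guest program would sidestep it, e.g.:

....
#include <stdio.h>
#include <unistd.h>

int main(void) {
    for (int i = 0; i < 3; i++) {
        printf("%d\n", i);
        /* Without this, a fully buffered stdout (e.g. a pipe)
         * only shows the output when the program exits. */
        fflush(stdout);
        sleep(1);
    }
    return 0;
}
....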

===== QEMU user mode does not show errors

Similarly to <>, QEMU error messages do not show at all through pipes.

In particular, it does not say anything if you pass it a non-existing executable:

....
qemu-x86_64 asdf | cat
....

So we just check ourselves manually.

== Kernel module utilities

=== insmod

https://git.busybox.net/busybox/tree/modutils/insmod.c?h=1_29_3[Provided by BusyBox]:

....
./run --eval-after 'insmod hello.ko'
....

=== myinsmod

If you are feeling raw, you can insert and remove modules with our own minimal module inserter and remover!

....
# init_module
./linux/myinsmod.out hello.ko
# finit_module
./linux/myinsmod.out hello.ko "" 1
./linux/myrmmod.out hello
....

which teaches you how it is done from C code.

Source:

* link:userland/linux/myinsmod.c[]
* link:userland/linux/myrmmod.c[]

The Linux kernel offers two system calls for module insertion:

* `init_module`
* `finit_module`

and:

....
man init_module
....

documents that:

____
The finit_module() system call is like init_module(), but reads the module to be loaded from the file descriptor fd. It is useful when the authenticity of a kernel module can be determined from its location in the filesystem; in cases where that is possible, the overhead of using cryptographically signed modules to determine the authenticity of a module can be avoided. The param_values argument is as for init_module().
____

`finit` is newer and was added only in v3.8. More rationale: https://lwn.net/Articles/519010/
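
A hedged sketch of the `finit_module` path, which only needs a file descriptor (assumes a v3.8+ kernel and libc headers that define `SYS_finit_module`; older glibc has no wrapper, hence the raw `syscall`):

....
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s module.ko\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY | O_CLOEXEC);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Arguments: fd, module parameter string, flags. */
    if (syscall(SYS_finit_module, fd, "", 0)) {
        perror("finit_module");
        return 1;
    }
    close(fd);
    return 0;
}
....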

Bibliography: https://stackoverflow.com/questions/5947286/how-to-load-linux-kernel-modules-from-c-code

=== modprobe

Implemented as a BusyBox applet by default: https://git.busybox.net/busybox/tree/modutils/modprobe.c?h=1_29_stable

`modprobe` searches for modules installed under:

....
ls /lib/modules/
....

and specified in the `modules.order` file.

This is the default install path for `CONFIG_SOME_MOD=m` modules built with `make modules_install` in the Linux kernel tree, with root path given by `INSTALL_MOD_PATH`, and therefore canonical in that sense.

Currently, there are only two kinds of kernel modules that you can try out with `modprobe`:

* modules built with Buildroot, see: xref:kernel-modules-buildroot-package[xrefstyle=full]
* modules built from the kernel tree itself, see: xref:dummy-irq[xrefstyle=full]

We are not installing our custom `./build-modules` modules there, because:

* we don't know the right way. Why is there no `install` or `install_modules` target for kernel modules?
+
This can of course be solved by running Buildroot in verbose mode, and copying whatever it is doing, initial exploration at: https://stackoverflow.com/questions/22783793/how-to-install-kernel-modules-from-source-code-error-while-make-process/53169078#53169078
* we would have to think about how to avoid including the kernel modules twice in the root filesystem, but still have <<9p>> working for fast development as described at: xref:your-first-kernel-module-hack[xrefstyle=full]

=== kmod

The more "reference" kernel.org implementation of `lsmod`, `insmod`, `rmmod`, etc.: https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git

Default implementation on desktop distros such as Ubuntu 16.04, where e.g.:

....
ls -l /bin/lsmod
....

gives:

....
lrwxrwxrwx 1 root root 4 Jul 25 15:35 /bin/lsmod -> kmod
....

and:

....
dpkg -l | grep -Ei kmod
....

contains:

....
ii kmod 22-1ubuntu5 amd64 tools for managing Linux kernel modules
....

BusyBox also implements its own version of those executables, see e.g. <>. Here we will only describe features where the BusyBox implementation differs from kmod.

==== module-init-tools

Name of a predecessor set of tools.

==== kmod modprobe

kmod's `modprobe` can also load modules under different names to avoid conflicts, e.g.:

....
sudo modprobe vmhgfs -o vm_hgfs
....

== Filesystems

=== OverlayFS

https://en.wikipedia.org/wiki/OverlayFS[OverlayFS] is a filesystem merged in the Linux kernel in 3.18.

As the name suggests, OverlayFS allows you to merge multiple directories into one. The following minimal runnable examples should give you an intuition on how it works:

* https://askubuntu.com/questions/109413/how-do-i-use-overlayfs/1075564#1075564
* https://stackoverflow.com/questions/31044982/how-to-use-multiple-lower-layers-in-overlayfs/52792397#52792397

We are very interested in this filesystem because we are looking for a way to make host cross compiled executables appear on the guest root `/` without reboot.

This would have several advantages:

* makes it faster to test modified guest programs
** not rebooting is fundamental for <>, where the reboot is very costly.
** no need to regenerate the root filesystem at all and reboot
** overcomes the `check_bin_arch` problem as shown at: xref:rpath[xrefstyle=full]
* we could keep the base root filesystem very small, which implies:
** less host disk usage, no need to copy the entire `./getvar out_rootfs_overlay_dir` to the image again
** no need to worry about <>

We can already make host files appear on the guest with <<9p>>, but they appear in a subdirectory instead of the root.

If they would appear on the root instead, that would be even more awesome, because you would just use the exact same paths relative to the root transparently.

For example, we wouldn't have to mess around with variables such as `PATH` and `LD_LIBRARY_PATH`.

The idea is to:

* 9P mount our overlay directory `./getvar out_rootfs_overlay_dir` on the guest, which we already do at `/mnt/9p/out_rootfs_overlay`
* then create an overlay with that directory and the root, and `chroot` into it. A minimal C sketch of this step is shown after this list.
+
I was unable to mount directly to `/` to avoid the `chroot`:
** https://stackoverflow.com/questions/41119656/how-can-i-overlayfs-the-root-filesystem-on-linux
** https://unix.stackexchange.com/questions/316018/how-to-use-overlayfs-to-protect-the-root-filesystem
** https://unix.stackexchange.com/questions/420646/mount-root-as-overlayfs
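
A hedged C sketch of that mount-plus-chroot step (all paths are hypothetical, the 9p mount is assumed to be already in place, and note that overlayfs requires `workdir` to be an empty directory on the same filesystem as `upperdir`, which is one of the open questions for a 9p upper layer):

....
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void) {
    /* lower = the real root, upper = host files shared over 9p. */
    const char *data =
        "lowerdir=/,"
        "upperdir=/mnt/9p/out_rootfs_overlay,"
        "workdir=/mnt/9p/overlay.work";
    if (mount("overlay", "/mnt/overlay", "overlay", 0, data)) {
        perror("mount");
        return EXIT_FAILURE;
    }
    if (chroot("/mnt/overlay") || chdir("/")) {
        perror("chroot");
        return EXIT_FAILURE;
    }
    /* Replace this process with a shell inside the merged root. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return EXIT_FAILURE;
}
....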

We already have a prototype of this running from `fstab` on guest at `/mnt/overlay`, but it has the following shortcomings:

* changes to underlying filesystems are not visible on the overlay unless you remount with `mount -o remount /mnt/overlay`, as mentioned https://github.com/torvalds/linux/blob/v4.18/Documentation/filesystems/overlayfs.txt#L332[on the kernel docs]:
+
....
Changes to the underlying filesystems while part of a mounted overlay
filesystem are not allowed. If the underlying filesystem is changed,
the behavior of the overlay is undefined, though it will not result in
a crash or deadlock.
....
+
This makes everything very inconvenient if you are working inside the `chroot`. You would have to leave the `chroot`, remount, then come back.
* the overlay does not contain sub-filesystems, e.g. `/proc`. We would have to re-mount them. But should be doable with some automation.

Even more awesome than `chroot` would be to `pivot_root`, but I couldn't get that working either:

* https://stackoverflow.com/questions/28015688/pivot-root-device-or-resource-busy
* https://unix.stackexchange.com/questions/179788/pivot-root-device-or-resource-busy

=== Secondary disk

A simpler and possibly less overhead alternative to <<9P>> would be to generate a secondary disk image with the benchmark you want to rebuild.

Then you can `umount` and re-mount on guest without reboot.

To build the secondary disk image run link:build-disk2[]:

....
./build-disk2
....

This will put the entire <> into a squashfs filesystem.

Then, if that filesystem is present, `./run` will automatically pass it as the second disk on the command line.

For example, from inside QEMU, you can mount that disk with:

....
mkdir /mnt/vdb
mount /dev/vdb /mnt/vdb
/mnt/vdb/lkmc/c/hello.out
....

To update the secondary disk while a simulation is running to avoid rebooting, first unmount in the guest:

....
umount /mnt/vdb
....

and then on the host:

....
# Edit the file.
vim userland/c/hello.c
./build-userland
./build-disk2
....

and now you can re-run the updated version of the executable on the guest after remounting it.

gem5 fs.py support for multiple disks is discussed at: https://stackoverflow.com/questions/50862906/how-to-attach-multiple-disk-images-in-a-simulation-with-gem5-fs-py/51037661#51037661

== Graphics

Both QEMU and gem5 are capable of outputting graphics to the screen, and taking mouse and keyboard input.

https://unix.stackexchange.com/questions/307390/what-is-the-difference-between-ttys0-ttyusb0-and-ttyama0-in-linux

=== QEMU text mode

Text mode is the default mode for QEMU.

The opposite of text mode is <>.

In text mode, we just show the serial console directly on the current terminal, without opening a QEMU GUI window.

You cannot see any graphics from text mode, but several text operations work well in this mode, including:

* scrolling up: xref:scroll-up-in-graphic-mode[xrefstyle=full]
* copy paste to and from the terminal

making this a good default, unless you really need graphics.

Text mode works by sending the terminal character by character to a serial device.

This is different from a display screen, where each character is a bunch of pixels, and it would be much harder to convert that into actual terminal text.

For more details, see:

* https://unix.stackexchange.com/questions/307390/what-is-the-difference-between-ttys0-ttyusb0-and-ttyama0-in-linux
* <>

Note that you can still see an image even in text mode with VNC:

....
./run --vnc
....

and on another terminal:

....
./vnc
....

but there is no terminal on the VNC window, just the <> penguin.

==== Quit QEMU from text mode

https://superuser.com/questions/1087859/how-to-quit-the-qemu-monitor-when-not-using-a-gui

However, our QEMU setup captures Ctrl + C and other common signals and sends them to the guest, which makes it hard to quit QEMU for the first time since there is no GUI either.

The simplest way to quit QEMU is to type:

....
Ctrl-A X
....

Alternative methods include:

* `quit` command on the <>
* `pkill qemu`

=== QEMU graphic mode

Enable graphic mode with:

....
./run --graphic
....

Outcome: you see a penguin due to <>.

For a more exciting GUI experience, see: xref:x11[xrefstyle=full]

Text mode is the default due to the following considerable advantages:

* copy and paste commands and stdout output to / from host
* get full panic traces when you start making the kernel crash :-) See also: https://unix.stackexchange.com/questions/208260/how-to-scroll-up-after-a-kernel-panic
* have a large scroll buffer, and be able to search it, e.g. by using tmux on host
* one less window floating around to think about in addition to your shell :-)
* graphics mode has only been properly tested on `x86_64`.

Text mode has the following limitations over graphics mode:

* you can't see graphics such as those produced by <>
* very early kernel messages such as `early console in extract_kernel` only show on the GUI, since at such early stages, not even the serial has been set up.

`x86_64` has a VGA device enabled by default, as can be seen with:

....
./qemu-monitor info qtree
....

and the Linux kernel picks it up through the https://en.wikipedia.org/wiki/Linux_framebuffer[fbdev] graphics system as can be seen from:

....
cat /dev/urandom > /dev/fb0
....

flooding the screen with colors. See also: https://superuser.com/questions/223094/how-do-i-know-if-i-have-kms-enabled
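
The same flooding can be done programmatically through the standard `linux/fb.h` interface; a hedged sketch that mmaps the framebuffer and paints it white:

....
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fb.h>

int main(void) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/fb0");
        return 1;
    }
    struct fb_fix_screeninfo finfo;
    if (ioctl(fd, FBIOGET_FSCREENINFO, &finfo)) {
        perror("FBIOGET_FSCREENINFO");
        return 1;
    }
    unsigned char *fb = mmap(NULL, finfo.smem_len,
                             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(fb, 0xff, finfo.smem_len); /* set every byte: white-ish fill */
    munmap(fb, finfo.smem_len);
    close(fd);
    return 0;
}
....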

==== Scroll up in graphic mode

Scroll up in <>:

....
Shift-PgUp
....

but I never managed to increase that buffer:

* https://askubuntu.com/questions/709697/how-to-increase-scrollback-lines-in-ubuntu14-04-2-server-edition
* https://unix.stackexchange.com/questions/346018/how-to-increase-the-scrollback-buffer-size-for-tty

The superior alternative is to use text mode and GNU screen or <>.

==== QEMU Graphic mode arm

===== QEMU graphic mode arm terminal

TODO: on arm, we see the penguin and some boot messages, but don't get a shell at the end:

....
./run --arch aarch64 --graphic
....

I think it does not work because the graphic window is <> only, i.e.:

....
cat /dev/urandom > /dev/fb0
....

fails with:

....
cat: write error: No space left on device
....

and has no effect, and the Linux kernel does not appear to have a built-in DRM console as it does for fbdev with <>.

There is however one out-of-tree implementation: <>.

===== QEMU graphic mode arm terminal implementation

`arm` and `aarch64` rely on the QEMU CLI option:

....
-device virtio-gpu-pci
....

and the kernel config options:

....
CONFIG_DRM=y
CONFIG_DRM_VIRTIO_GPU=y
....

Unlike x86, `arm` and `aarch64` don't have a display device attached by default, thus the need for `virtio-gpu-pci`.

See also https://wiki.qemu.org/Documentation/Platforms/ARM (recently edited and corrected by yours truly... :-)).

===== QEMU graphic mode arm VGA

TODO: how to use VGA on ARM? https://stackoverflow.com/questions/20811203/how-can-i-output-to-vga-through-qemu-arm Tried:

....
-device VGA
....

But https://github.com/qemu/qemu/blob/v2.12.0/docs/config/mach-virt-graphical.cfg#L264 says:

....
# We use virtio-gpu because the legacy VGA framebuffer is
# very troublesome on aarch64, and virtio-gpu is the only
# video device that doesn't implement it.
....

so maybe it is not possible?

=== gem5 graphic mode

gem5 does not have a "text mode", since it cannot redirect the Linux terminal to the same host terminal where the executable is running: you are always forced to connect to the terminal with `gem5-shell`.

TODO could not get it working on `x86_64`, only ARM.

Overview: https://stackoverflow.com/questions/50364863/how-to-get-graphical-gui-output-and-user-touch-keyboard-mouse-input-in-a-ful/50364864#50364864

More concretely, first build the kernel with the <>, and then run:

....
./build-linux \
--arch arm \
--custom-config-file-gem5 \
--linux-build-id gem5-v4.15 \
;
./run --arch arm --emulator gem5 --linux-build-id gem5-v4.15
....

and then on another shell:

....
vinagre localhost:5900
....

The <> penguin only appears after several seconds, together with kernel messages of type:

....
[ 0.152755] [drm] found ARM HDLCD version r0p0
[ 0.152790] hdlcd 2b000000.hdlcd: bound virt-encoder (ops 0x80935f94)
[ 0.152795] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.152799] [drm] No driver support for vblank timestamp query.
[ 0.215179] Console: switching to colour frame buffer device 240x67
[ 0.230389] hdlcd 2b000000.hdlcd: fb0: frame buffer device
[ 0.230509] [drm] Initialized hdlcd 1.0.0 20151021 for 2b000000.hdlcd on minor 0
....

The port `5900` is incremented by one if you already have something running on that port; gem5 tells us the right port on stdout:

....
system.vncserver: Listening for connections on port 5900
....

and when we connect it shows a message:

....
info: VNC client attached
....

Alternatively, you can also dump each new frame to an image file with `--frame-capture`:

....
./run \
--arch arm \
--emulator gem5 \
--linux-build-id gem5-v4.15 \
-- --frame-capture \
;
....

This creates one compressed PNG whenever the screen image changes, inside the <>, with filenames of type:

....
frames_system.vncserver/fb...png.gz
....

It is fun to see how we get one new frame whenever the white underscore cursor appears and reappears under the penguin!

The last frame is always available uncompressed at: `system.framebuffer.png`.

TODO <> failed on `aarch64` with:

....
kmscube[706]: unhandled level 2 translation fault (11) at 0x00000000, esr 0x92000006, in libgbm.so.1.0.0[7fbf6a6000+e000]
....

Tested on: https://github.com/cirosantilli/linux-kernel-module-cheat/commit/38fd6153d965ba20145f53dc1bb3ba34b336bde9[38fd6153d965ba20145f53dc1bb3ba34b336bde9]

==== Graphic mode gem5 aarch64

For `aarch64` we also need to configure the kernel with link:linux_config/display[]:

....
git -C "$(./getvar linux_source_dir)" fetch https://gem5.googlesource.com/arm/linux gem5/v4.15:gem5/v4.15
git -C "$(./getvar linux_source_dir)" checkout gem5/v4.15
./build-linux \
--arch aarch64 \
--config-fragment linux_config/display \
--custom-config-file-gem5 \
--linux-build-id gem5-v4.15 \
;
git -C "$(./getvar linux_source_dir)" checkout -
./run --arch aarch64 --emulator gem5 --linux-build-id gem5-v4.15
....

This is because the gem5 `aarch64` defconfig does not enable HDLCD like the 32-bit `arm` one does, for some reason.

==== gem5 graphic mode DP650

TODO get working. There is an unmerged patchset at: https://gem5-review.googlesource.com/c/public/gem5/+/11036/1

The DP650 is newer display hardware than HDLCD. TODO: is its interface publicly documented anywhere? Since it has a gem5 model and https://github.com/torvalds/linux/blob/v4.19/drivers/gpu/drm/arm/Kconfig#L39[in-tree Linux kernel support], that information cannot be secret?

The key option to enable support in Linux is `DRM_MALI_DISPLAY=y` which we enable at link:linux_config/display[].

Build the kernel exactly as for <> and then run with:

....
./run --arch aarch64 --dp650 --emulator gem5 --linux-build-id gem5-v4.15
....

==== gem5 graphic mode internals

We cannot use mainline Linux because the <> are required at least to provide the `CONFIG_DRM_VIRT_ENCODER` option.

gem5 emulates the http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0541c/CHDBAIDI.html[HDLCD] ARM Holdings hardware for `arm` and `aarch64`.

The kernel uses HDLCD to implement the <> interface, the required kernel config options are present at: link:linux_config/display[].

TODO: minimize out the `--custom-config-file`. If we just remove it on `arm`, it does not work, with a failing dmesg:

....
[ 0.066208] [drm] found ARM HDLCD version r0p0
[ 0.066241] hdlcd 2b000000.hdlcd: bound virt-encoder (ops drm_vencoder_ops)
[ 0.066247] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.066252] [drm] No driver support for vblank timestamp query.
[ 0.066276] hdlcd 2b000000.hdlcd: Cannot do DMA to address 0x0000000000000000
[ 0.066281] swiotlb: coherent allocation failed for device 2b000000.hdlcd size=8294400
[ 0.066288] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.15.0 #1
[ 0.066293] Hardware name: V2P-AARCH64 (DT)
[ 0.066296] Call trace:
[ 0.066301] dump_backtrace+0x0/0x1b0
[ 0.066306] show_stack+0x24/0x30
[ 0.066311] dump_stack+0xb8/0xf0
[ 0.066316] swiotlb_alloc_coherent+0x17c/0x190
[ 0.066321] __dma_alloc+0x68/0x160
[ 0.066325] drm_gem_cma_create+0x98/0x120
[ 0.066330] drm_fbdev_cma_create+0x74/0x2e0
[ 0.066335] __drm_fb_helper_initial_config_and_unlock+0x1d8/0x3a0
[ 0.066341] drm_fb_helper_initial_config+0x4c/0x58
[ 0.066347] drm_fbdev_cma_init_with_funcs+0x98/0x148
[ 0.066352] drm_fbdev_cma_init+0x40/0x50
[ 0.066357] hdlcd_drm_bind+0x220/0x428
[ 0.066362] try_to_bring_up_master+0x21c/0x2b8
[ 0.066367] component_master_add_with_match+0xa8/0xf0
[ 0.066372] hdlcd_probe+0x60/0x78
[ 0.066377] platform_drv_probe+0x60/0xc8
[ 0.066382] driver_probe_device+0x30c/0x478
[ 0.066388] __driver_attach+0x10c/0x128
[ 0.066393] bus_for_each_dev+0x70/0xb0
[ 0.066398] driver_attach+0x30/0x40
[ 0.066402] bus_add_driver+0x1d0/0x298
[ 0.066408] driver_register+0x68/0x100
[ 0.066413] __platform_driver_register+0x54/0x60
[ 0.066418] hdlcd_platform_driver_init+0x20/0x28
[ 0.066424] do_one_initcall+0x44/0x130
[ 0.066428] kernel_init_freeable+0x13c/0x1d8
[ 0.066433] kernel_init+0x18/0x108
[ 0.066438] ret_from_fork+0x10/0x1c
[ 0.066444] hdlcd 2b000000.hdlcd: Failed to set initial hw configuration.
[ 0.066470] hdlcd 2b000000.hdlcd: master bind failed: -12
[ 0.066477] hdlcd: probe of 2b000000.hdlcd failed with error -12
....

So what other options are missing from `gem5_defconfig`? It would be cool to minimize it out to better understand the options.

[[x11]]
=== X11 Buildroot

Once you've seen the `CONFIG_LOGO` penguin as a sanity check, you can try to go for a cooler X11 Buildroot setup.

Build and run:

....
./build-buildroot --config-fragment buildroot_config/x11
./run --graphic
....

Inside QEMU:

....
startx
....

And then from the GUI you can start exciting graphical programs such as:

....
xcalc
xeyes
....

Outcome: xref:image-x11[xrefstyle=full]

[[image-x11]]
.X11 Buildroot graphical user interface screenshot
[link=x11.png]
image::x11.png[]

We don't build X11 by default because it takes a considerable amount of time (about 20%), and is not expected to be used by most users: you need to pass the `-x` flag to enable it.

More details: https://unix.stackexchange.com/questions/70931/how-to-install-x11-on-my-own-linux-buildroot-system/306116#306116

We are not sure how well that graphics stack represents real systems, but if it does, it would be a good way to understand how such stacks work.

The X11 packages have an `xserver` prefix as in:

....
./build-buildroot --config-fragment buildroot_config/x11 -- xserver_xorg-server-reconfigure
....

the easiest way to find them is to just list `"$(./getvar buildroot_build_build_dir)"/x*`.

TODO: as of c2696c978d6ca88e8b8599c92b1beeda80eb62b2, `startx` leads to a <>:

....
[ 2.809104] WARNING: CPU: 0 PID: 51 at drivers/gpu/drm/ttm/ttm_bo_vm.c:304 ttm_bo_vm_open+0x37/0x40
....

==== X11 Buildroot mouse not moving

TODO: at 9076c1d9bcc13b6efdb8ef502274f846d8d4e6a1 the mouse no longer works. I'm 100% sure that it was working before, but I didn't run it for a while, and it stopped working at some point. Needs bisection, starting from whatever commit last touched the X11 setup.

* https://askubuntu.com/questions/730891/how-can-i-get-a-mouse-cursor-in-qemu
* https://stackoverflow.com/questions/19665412/mouse-and-keyboard-not-working-in-qemu-emulator

`-show-cursor` did not help: I only get to see the host cursor, and the guest cursor still does not move.

Doing:

....
watch -n 1 grep i8042 /proc/interrupts
....

shows that interrupts do happen on mouse and keyboard presses, so something must be wrong either with:

* QEMU. Same behaviour if I try the host's QEMU 2.10.1 however.
* X11 configuration. We do have `BR2_PACKAGE_XDRIVER_XF86_INPUT_MOUSE=y`.

`/var/log/Xorg.0.log` contains the following interesting lines:

....
[ 27.549] (II) LoadModule: "mouse"
[ 27.549] (II) Loading /usr/lib/xorg/modules/input/mouse_drv.so
[ 27.590] (EE) : Cannot find which device to use.
[ 27.590] (EE) : cannot open input device
[ 27.590] (EE) PreInit returned 2 for ""
[ 27.590] (II) UnloadModule: "mouse"
....

The file `/dev/input/mice` does not exist.
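
To see which input devices the kernel did register, and which handlers they got, this standard procfs file can help with the debugging:

....
cat /proc/bus/input/devices
....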

Note that our current kernel config fragment sets:

....
# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
....

for gem5, so you might want to remove those lines to debug this.

==== X11 Buildroot ARM

On ARM, `startx` hangs at a message:

....
vgaarb: this pci device is not a vga device
....

and nothing shows on the screen, and:

....
grep EE /var/log/Xorg.0.log
....

says:

....
(EE) Failed to load module "modesetting" (module does not exist, 0)
....

A friend told me this but I haven't tried it yet:

* `xf86-video-modesetting` is likely the missing ingredient, but it does not seem possible to activate it from Buildroot currently without patching things.
* `xf86-video-fbdev` should work as well, but we need to make sure fbdev is enabled, and maybe add some lines to `Xorg.conf`, as sketched below
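
As an untested sketch of that second suggestion: forcing the `fbdev` driver would mean adding something like the following to the guest's `Xorg.conf` (standard X.org syntax, but we haven't verified it in this setup):

....
Section "Device"
    Identifier "fbdev"
    Driver     "fbdev"
    Option     "fbdev" "/dev/fb0"
EndSection
....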

== Networking

=== Enable networking

We disable networking by default because it starts a userland process, and we want to keep the number of userland processes to a minimum to make the system more understandable, as explained at: xref:resource-tradeoff-guidelines[xrefstyle=full]

To enable networking on Buildroot, simply run:

....
ifup -a
....

That command goes over all (`-a`) the interfaces in `/etc/network/interfaces` and brings them up.
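
For reference, a minimal such file in BusyBox `ifupdown` syntax looks something like the following sketch; the actual file in our root filesystem may differ:

....
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
....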

Then test it with:

....
wget google.com
cat index.html
....

Disable networking with:

....
ifdown -a
....

To enable networking by default after boot, use the methods documented at <>.

=== ping

`ping` does not work within QEMU by default, e.g.:

....
ping google.com
....

hangs after printing the header:

....
PING google.com (216.58.204.46): 56 data bytes
....

Here Ciro describes how to get it working: https://unix.stackexchange.com/questions/473448/how-to-ping-from-the-qemu-guest-to-an-external-url

Further bibliography: https://superuser.com/questions/787400/qemu-user-mode-networking-doesnt-work

=== Guest host networking

In this section we discuss how to interact between the guest and the host through networking.

First ensure that you can access the external network since that is easier to get working, see: xref:networking[xrefstyle=full].

==== Host to guest networking

===== nc host to guest

With `nc` we can create the most minimal example possible as a sanity check.

On guest run:

....
nc -l -p 45455
....

Then on host run:

....
echo asdf | nc localhost 45455
....

`asdf` appears on the guest.

This uses:

* BusyBox's `nc` utility, which is enabled with `CONFIG_NC=y`
* `nc` from the `netcat-openbsd` package on an Ubuntu 18.04 host

Only this specific port works by default since we have forwarded it on the QEMU command line.
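
For reference, this kind of forwarding is expressed with the `hostfwd` suboption of QEMU's user-mode networking, along the lines of the following sketch; the flags actually used are best checked in the link:run[] script:

....
-netdev user,id=net0,hostfwd=tcp::45455-:45455 \
-device virtio-net-pci,netdev=net0
....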

We use this exact procedure to connect to <>.

===== ssh into guest

Not enabled by default due to the build / runtime overhead. To enable, build with:

....
./build-buildroot --config 'BR2_PACKAGE_OPENSSH=y'
....

Then inside the guest turn on sshd:

....
./sshd.sh
....

Source: link:rootfs_overlay/lkmc/sshd.sh[]

And finally on host:

....
ssh root@localhost -p 45456
....
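
Once `ssh` works, copying files through the same forwarded port should also work, e.g.:

....
scp -P 45456 myfile root@localhost:
....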

Bibliography: https://unix.stackexchange.com/questions/124681/how-to-ssh-from-host-to-guest-using-qemu/307557#307557

===== gem5 host to guest networking

We could not do port forwarding from host to guest, and therefore could not use `gdbserver`: https://stackoverflow.com/questions/48941494/how-to-do-port-forwarding-from-guest-to-host-in-gem5

==== Guest to host networking

First <>.

Then in the host, start a server:

....
python3 -m http.server 8000
....

And then in the guest, find the IP we need to hit with:

....
ip route
....

which gives:

....
default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0 scope link src 10.0.2.15
....

so we use in the guest:

....
wget 10.0.2.2:8000
....

Bibliography:

* https://serverfault.com/questions/769874/how-to-forward-a-port-from-guest-to-host-in-qemu-kvm/951835#951835
* https://unix.stackexchange.com/questions/78953/qemu-how-to-ping-host-network/547698#547698

=== 9P

The https://en.wikipedia.org/wiki/9P_(protocol)[9p protocol] allows the guest to mount a host directory.

Both QEMU and <> support 9P.

==== 9P vs NFS

9P, NFS and sshfs all allow sharing directories between guest and host.

Advantages of 9P:

* does not require `sudo` on the host to mount
* NFS could also share a guest directory to the host, but this would require running a server on the guest, which adds <>
+
Furthermore, this would be inconvenient, since what we usually want to do is to share host cross-built files with the guest, and to do that we would have to copy the files over after the guest starts the server.
* QEMU implements 9P natively, which makes it very stable and convenient, and must mean it is a simpler protocol than NFS, as one would expect.
+
This is not the case for gem5 7bfb7f3a43f382eb49853f47b140bfd6caad0fb8 unfortunately, which relies on the https://github.com/chaos/diod[diod] host daemon, although it is not unfeasible that future versions could implement it natively as well.

Advantages of NFS:

* way more widely used and therefore stable and available, not to mention that it also works on real hardware.
* the name does not start with a digit, which is an invalid identifier in all programming languages known to man. Who in their right mind would name a software project like that? It does not even match the natural order of Plan 9; Plan then 9: P9!

==== 9P getting started

As usual, we have already set everything up for you. On host:

....
cd "$(./getvar p9_dir)"
uname -a > host
....

Guest:

....
cd /mnt/9p/data
cat host
uname -a > guest
....

Host:

....
cat guest
....

The main ingredients for this are:

* `9P` settings in our <>
* `9p` entry on our link:rootfs_overlay/etc/fstab[]
+
Alternatively, you could also mount your own with:
+
....
mkdir /mnt/my9p
mount -t 9p -o trans=virtio,version=9p2000.L host0 /mnt/my9p
....
+
where mount tag `host0` is set by the emulator (`mount_tag` flag on QEMU CLI), and can be found in the guest with: `cat /sys/bus/virtio/drivers/9pnet_virtio/virtio0/mount_tag` as documented at: https://www.kernel.org/doc/Documentation/filesystems/9p.txt[].
* Launch QEMU with `-virtfs` as in your link:run[] script
+
When we tried:
+
....
security_model=mapped
....
+
writes from guest failed due to user mismatch problems: https://serverfault.com/questions/342801/read-write-access-for-passthrough-9p-filesystems-with-libvirt-qemu

Bibliography:

* https://superuser.com/questions/628169/how-to-share-a-directory-with-the-host-without-networking-in-qemu
* https://wiki.qemu.org/Documentation/9psetup

==== gem5 9P

It is possible on aarch64 as shown at: https://gem5-review.googlesource.com/c/public/gem5/+/22831[], and it is just a matter of exposing it to X86 for those who want it.

Enable it by passing the `--vio-9p` option on the fs.py gem5 command line:

....
./run --arch aarch64 --emulator gem5 -- --vio-9p
....

Then on the guest:

....
mkdir -p /mnt/9p/gem5
mount -t 9p -o trans=virtio,version=9p2000.L,aname=/path/to/linux-kernel-module-cheat/out/run/gem5/aarch64/0/m5out/9p/share gem5 /mnt/9p/gem5
echo asdf > /mnt/9p/gem5/qwer
....

Yes, you have to pass the full path to the directory on the host. Yes, this is horrible.

The shared directory is:

....
out/run/gem5/aarch64/0/m5out/9p/share
....

so we can observe the file the guest wrote from the host with:

....
out/run/gem5/aarch64/0/m5out/9p/share/qwer
....

and vice versa:

....
echo zxvc > out/run/gem5/aarch64/0/m5out/9p/share/qwer
....

is now visible from the guest:

....
cat /mnt/9p/gem5/qwer
....

Checkpoint restore with an open mount will likely fail because gem5 relies on the external diod executable to implement the protocol. The protocol is not very complex, and QEMU implements it in-tree, which is what gem5 should do as well at some point.

Checkpointing without `--vio-9p` and then restoring with `--vio-9p` did not work either: the mount fails.

However, this did work, on guest:

....
umount /mnt/9p/gem5
m5 checkpoint
....

then restore with the detailed CPU of interest, e.g.:

....
./run --arch aarch64 --emulator gem5 -- --vio-9p --cpu-type DerivO3CPU --caches
....

Tested on gem5 b2847f43c91e27f43bd4ac08abd528efcf00f2fd, LKMC 52a5fdd7c1d6eadc5900fc76e128995d4849aada.

==== NFS

TODO: get working.

<<9p>> is better with emulation, but let's just get this working for fun.

First make sure that this works: xref:guest-to-host-networking[xrefstyle=full].

Then, build the kernel with NFS support:

....
./build-linux --config-fragment linux_config/nfs
....

Now on host:

....
sudo apt-get install nfs-kernel-server
....

Now edit `/etc/exports` to contain:

....
/tmp *(rw,sync,no_root_squash,no_subtree_check)
....

and restart the server:

....
sudo systemctl restart nfs-kernel-server
....
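
As a host-side sanity check, the standard NFS utilities can re-export `/etc/exports` without a full restart, and show what is currently being exported:

....
sudo exportfs -ra
showmount -e localhost
....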

Now on guest:

....
mkdir /mnt/nfs
mount -t nfs 10.0.2.2:/tmp /mnt/nfs
....

TODO: failing with:

....
mount: mounting 10.0.2.2:/tmp on /mnt/nfs failed: No such device
....

And now the `/tmp` directory from host is not mounted on guest!

If you don't want the NFS server to start automatically after the next boot, to save resources, https://askubuntu.com/questions/19320/how-to-enable-or-disable-services[do]:

....
sudo systemctl disable nfs-kernel-server
....

== Operating systems

https://en.wikipedia.org/wiki/Operating_system

* <>
* <>
* <>
* <>
* <>

== Linux kernel

https://en.wikipedia.org/wiki/Linux_kernel

=== Linux kernel configuration

==== Modify kernel config

To modify a single option on top of our <>, do:

....
./build-linux --config 'CONFIG_FORTIFY_SOURCE=y'
....

Kernel modules depend on certain kernel configs, and therefore in general you might have to clean and rebuild the kernel modules after changing the kernel config:

....
./build-modules --clean
./build-modules
....

and then proceed as in <>.

You might often get away without rebuilding the kernel modules, however.

To use an extra kernel config fragment file on top of our defaults, do:

....
printf '
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
' > data/myconfig
./build-linux --config-fragment 'data/myconfig'
....

To use just your own exact `.config` instead of our defaults ones, use:

....
./build-linux --custom-config-file data/myconfig
....

There is also a shortcut `--custom-config-file-gem5` to use the <>.

The following options can all be used together, listed in order of decreasing precedence:

* `--config`
* `--config-fragment`
* `--custom-config-file`

To do a clean menu config yourself and use that for the build, do:

....
./build-linux --clean
./build-linux --custom-config-target menuconfig
....

But remember that every new build re-configures the kernel by default, so to keep your configs you will need to use on further builds:

....
./build-linux --no-configure
....

So what you likely want to do instead is to save that as a new `defconfig` and use it later as:

....
./build-linux --no-configure --no-modules-install savedefconfig
cp "$(./getvar linux_build_dir)/defconfig" data/myconfig
./build-linux --custom-config-file data/myconfig
....

You can also use other config generating targets such as `defconfig` with the same method as shown at: xref:linux-kernel-defconfig[xrefstyle=full].

==== Find the kernel config

Get the build config in guest:

....
zcat /proc/config.gz
....

or with our shortcut:

....
./conf.sh
....

or to conveniently grep for a specific option case insensitively:

....
./conf.sh ikconfig
....

Source: link:rootfs_overlay/lkmc/conf.sh[].

This is enabled by:

....
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
....

From host:

....
cat "$(./getvar linux_config)"
....

Just for fun https://stackoverflow.com/questions/14958192/how-to-get-the-config-from-a-linux-kernel-image/14958263#14958263[]:

....
./linux/scripts/extract-ikconfig "$(./getvar vmlinux)"
....

although this can be useful when someone gives you a random image.

[[kernel-configs-about]]
==== About our Linux kernel configs

By default, link:build-linux[] generates a `.config` that is a mixture of:

* a base config extracted from Buildroot's minimal per machine `.config`, which has the minimal options needed to boot as explained at: xref:buildroot-kernel-config[xrefstyle=full].
* small overlays put on top of that

To find out which kernel configs are being used exactly, simply run:

....
./build-linux --dry-run
....

and look for the `merge_config.sh` call. This script from the Linux kernel tree, as the name suggests, merges multiple configuration files into one as explained at: https://unix.stackexchange.com/questions/224887/how-to-script-make-menuconfig-to-automate-linux-kernel-build-configuration/450407#450407
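
For the curious, a manual invocation of that script looks roughly like the following sketch, with illustrative paths; the real invocation is best copied from the `--dry-run` output:

....
cd "$(./getvar linux_source_dir)"
# -m: just merge the fragments, don't run "make alldefconfig" afterwards
./scripts/kconfig/merge_config.sh -m .config path/to/fragment1 path/to/fragment2
....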

For each arch, the base of our configs are named as:

....
linux_config/buildroot-<arch>
....

e.g.: link:linux_config/buildroot-x86_64[].

These configs are extracted directly from a Buildroot build with link:update-buildroot-kernel-configs[].

Note that Buildroot can override some of the configurations with `sed`, e.g. it forces `CONFIG_BLK_DEV_INITRD=y` when `BR2_TARGET_ROOTFS_CPIO` is on. For this reason, those configs are not simply copy-pasted from Buildroot files, but rather extracted from a Buildroot kernel build, and then minimized with `make savedefconfig`: https://stackoverflow.com/questions/27899104/how-to-create-a-defconfig-file-from-a-config

On top of those, we add the following by default:

* link:linux_config/min[]: see: xref:linux-kernel-min-config[xrefstyle=full]
* link:linux_config/default[]: other optional configs that we enable by default because they increase visibility, or expose some cool feature, and don't significantly increase build time nor add significant runtime overhead
+
We have since observed that the kernel size itself is very bloated compared to `defconfig` as shown at: xref:linux-kernel-defconfig[xrefstyle=full].

[[buildroot-kernel-config]]
===== About Buildroot's kernel configs

To see Buildroot's base configs, start from https://github.com/buildroot/buildroot/blob/2018.05/configs/qemu_x86_64_defconfig[`buildroot/configs/qemu_x86_64_defconfig`].

That file contains `BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/qemu/x86_64/linux-4.15.config"`, which points to the base config file used: https://github.com/buildroot/buildroot/blob/2018.05/board/qemu/x86_64/linux-4.15.config[board/qemu/x86_64/linux-4.15.config].

`arm`, on the other hand, uses https://github.com/buildroot/buildroot/blob/2018.05/configs/qemu_arm_vexpress_defconfig[`buildroot/configs/qemu_arm_vexpress_defconfig`], which contains `BR2_LINUX_KERNEL_DEFCONFIG="vexpress"`, and therefore just does a `make vexpress_defconfig`, and gets its config from the Linux kernel tree itself.

====== Linux kernel defconfig

To boot https://stackoverflow.com/questions/41885015/what-exactly-does-linux-kernels-make-defconfig-do[defconfig] from disk on Linux and see a shell, all we need is these missing virtio options:

....
./build-linux \
--linux-build-id defconfig \
--custom-config-target defconfig \
--config CONFIG_VIRTIO_PCI=y \
--config CONFIG_VIRTIO_BLK=y \
;
./run --linux-build-id defconfig
....

Oh, and check this out:

....
du -h \
"$(./getvar vmlinux)" \
"$(./getvar --linux-build-id defconfig vmlinux)" \
;
....

Output:

....
360M /path/to/linux-kernel-module-cheat/out/linux/default/x86_64/vmlinux
47M /path/to/linux-kernel-module-cheat/out/linux/defconfig/x86_64/vmlinux
....

Brutal. Where did we go wrong?

The extra virtio options are not needed if we use <>:

....
./build-linux \
--linux-build-id defconfig \
--custom-config-target defconfig \
;
./run --initrd --linux-build-id defconfig
....

On aarch64, we can boot from initrd with:

....
./build-linux \
--arch aarch64 \
--linux-build-id defconfig \
--custom-config-target defconfig \
;
./run \
--arch aarch64 \
--initrd \
--linux-build-id defconfig \
--memory 2G \
;
....

We need the 2G of memory because the CPIO is 600MiB due to a humongous number of loadable kernel modules!

In aarch64, the size situation is inverted from x86_64, and this can be seen on the vmlinux size as well:

....
118M /path/to/linux-kernel-module-cheat/out/linux/default/aarch64/vmlinux
240M /path/to/linux-kernel-module-cheat/out/linux/defconfig/aarch64/vmlinux
....

So it seems that, rather than creating a minimal config that boots QEMU, the ARM devs decided to try and make a single config that boots every board in existence. Terrible!

Bibliography: https://unix.stackexchange.com/questions/29439/compiling-the-kernel-with-default-configurations/204512#204512

Tested on 1e2b7f1e5e9e3073863dc17e25b2455c8ebdeadd + 1.

====== Linux kernel min config

link:linux_config/min[] contains the minimal tweaks required to boot gem5, and to support our slightly different QEMU command line options compared to Buildroot, on all archs.

It is one of the default config fragments we use, as explained at: xref:kernel-configs-about[xrefstyle=full].

Having the same config working for both QEMU and gem5 (oh, the hours of bisection) means that you can deal with functional matters in QEMU, which runs much faster, and switch to gem5 only for performance issues.

We can build just with `min` on top of the base config with:

....
./build-linux \
--arch aarch64 \
--config-fragment linux_config/min \
--custom-config-file linux_config/buildroot-aarch64 \
--linux-build-id min \
;
....

The resulting vmlinux had a very similar size to the default one. It seems that link:linux_config/buildroot-aarch64[] already contains or implies most link:linux_config/default[] options? TODO: that seems odd, really?

Tested on 649d06d6758cefd080d04dc47fd6a5a26a620874 + 1.

===== Notable alternate gem5 kernel configs

Other configs which we had previously tested at 4e0d9af81fcce2ce4e777cb82a1990d7c2ca7c1e are:

* `arm` and `aarch64` configs present in the official ARM gem5 Linux kernel fork as described at: xref:gem5-arm-linux-kernel-patches[xrefstyle=full]. Some of the configs present there are added by the patches.
* Jason's magic `x86_64` config: http://web.archive.org/web/20171229121642/http://www.lowepower.com/jason/files/config which is referenced at: http://web.archive.org/web/20171229121525/http://www.lowepower.com/jason/setting-up-gem5-full-system.html[]. QEMU boots with that by removing `# CONFIG_VIRTIO_PCI is not set`.

=== Kernel version

==== Find the kernel version

We try to use the latest possible kernel major release version.

In QEMU:

....
cat /proc/version
....
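
or just the release string with:

....
uname -r
....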

or in the source:

....
cd "$(./getvar linux_source_dir)"
git log | grep -E ' Linux [0-9]+\.' | head
....

==== Update the Linux kernel

During an update, all your kernel modules may break, since the in-kernel API is not stable.

The breakages are usually trivial: things moving around between headers or into sub-structs.

The userland, however, should simply not break, as Linus enforces strict backwards compatibility of userland interfaces.

This backwards compatibility is just awesome: it makes getting and running the latest master painless.

This also makes this repo the perfect setup to develop the Linux kernel.

In case something breaks while updating the Linux kernel, you can try to bisect it to understand the root cause, see: xref:bisection[xrefstyle=full].

===== Update the Linux kernel LKMC procedure

First, use the branching procedure described at: xref:update-a-forked-submodule[xrefstyle=full]

Because the kernel is so central to this repository, almost all tests must be re-run, so basically just follow the full testing procedure described at: xref:test-this-repo[xrefstyle=full]. The only tests that can be skipped are essentially the <> tests.

Before committing, don't forget to update:

* the `linux_kernel_version` constant in link:common.py[]
* the tagline of this repository on:
** this README
** the GitHub project description

==== Downgrade the Linux kernel

The kernel is not forward compatible, however, so downgrading the Linux kernel requires downgrading the userland as well, to the latest Buildroot branch that supports it.

The default Linux kernel version is bumped in Buildroot with commit messages of type:

....
linux: bump default to version 4.9.6
....

So you can try:

....
git log --grep 'linux: bump default to version'
....

Those commits change `BR2_LINUX_KERNEL_LATEST_VERSION` in `linux/Config.in`.

You should then look up if there is a branch that supports that kernel. Staying on branches is a good idea as they will get backports, in particular ones that fix the build as newer host versions come out.

Finally, after downgrading Buildroot, if something does not work, you might also have to make some changes to how this repo uses Buildroot, as the Buildroot configuration options might have changed.

We don't expect those changes to be very difficult. A good way to approach the task is to:

* do a dry run build to get the equivalent Bash commands used:
+
....
./build-buildroot --dry-run
....
* build the Buildroot documentation for the version you are going to use, and check if all Buildroot build commands make sense there

Then, if you spot an option that is wrong, some grepping in this repo should quickly point you to the code you need to modify.

It also possible that you will need to apply some patches from newer Buildroot versions for it to build, due to incompatibilities with the host Ubuntu packages and that Buildroot version. Just read the error message, and try:

* `git log master -- package/`
* Google the error message for mailing list hits

Successful port reports:

* v3.18: https://github.com/cirosantilli/linux-kernel-module-cheat/issues/39#issuecomment-438525481

=== Kernel command line parameters

Bootloaders can pass a string as input to the Linux kernel when it is booting to control its behaviour, much like the `execve` system call does to userland processes.

This allows us to control the behaviour of the kernel without rebuilding anything.

With QEMU, QEMU itself acts as the bootloader and provides the `-append` option, which we expose through `./run --kernel-cli`, e.g.:

....
./run --kernel-cli 'foo bar'
....

Then inside the guest, you can check which options were given with:

....
cat /proc/cmdline
....

They are also printed at the beginning of the boot message:

....
dmesg | grep "Command line"
....

See also:

* https://unix.stackexchange.com/questions/48601/how-to-display-the-linux-kernel-command-line-parameters-given-for-the-current-bo
* https://askubuntu.com/questions/32654/how-do-i-find-the-boot-parameters-used-by-the-running-kernel

The arguments are documented in the kernel documentation: https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html

When dealing with real boards, extra command line options are provided on some magic bootloader configuration file, e.g.:

* GRUB configuration files: https://askubuntu.com/questions/19486/how-do-i-add-a-kernel-boot-parameter
* Raspberry pi `/boot/cmdline.txt` on a magic partition: https://raspberrypi.stackexchange.com/questions/14839/how-to-change-the-kernel-commandline-for-archlinuxarm-on-raspberry-pi-effectly

==== Kernel command line parameters escaping

Double quotes can be used to escape spaces as in `opt="a b"`, but the double quotes themselves cannot be escaped, e.g. there is no way to express `opt="a\"b"`.

This even led us to use base64 encoding with `--eval`!
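
To illustrate the idea with plain shell (this is not the exact mechanism our scripts use, just the base64 round trip):

....
# On the host, encode the command:
printf '%s' 'echo "a b"' | base64
# ZWNobyAiYSBiIg==

# In the guest, decode and run it:
printf '%s' 'ZWNobyAiYSBiIg==' | base64 -d | sh
# a b
....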

==== Kernel command line parameters definition points

There are two methods:

* `__setup` as in:
+
....
__setup("console=", console_setup);
....
* `core_param` as in:
+
....
core_param(panic, panic_timeout, int, 0644);
....

`core_param` suggests how they are different:

....
/**
* core_param - define a historical core kernel parameter.

...

* core_param is just like module_param(), but cannot be modular and
* doesn't add a prefix (such as "printk."). This is for compatibility
* with __setup(), and it makes sense as truly core parameters aren't
* tied to the particular file they're in.
*/
....
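
For example, since `panic_timeout` is registered with `core_param` as shown above, we would expect to be able to set it at boot without any prefix and read it back through procfs, along these lines (untested sketch):

....
./run --kernel-cli 'panic=10'
# then in the guest, expect 10:
cat /proc/sys/kernel/panic
....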

==== rw

By default, the Linux kernel mounts the root filesystem as read-only. TODO rationale?

This cannot be observed in the default BusyBox init, because by default our link:rootfs_overlay/etc/inittab[] does:

....
/bin/mount -o remount,rw /
....

Analogously, Ubuntu 18.04 does in its fstab something like:

....
UUID=/dev/sda1 / ext4 errors=remount-ro 0 1
....

which uses default mount `rw` flags.

We have however removed those init setups to keep things more minimal, and instead rely on the `rw` kernel boot parameter, which makes the kernel mount the root filesystem as writable.

To observe the default readonly behaviour, hack the link:run[] script to remove <>, and then run on a raw shell:

....
./run --kernel-cli 'init=/bin/sh'
....

Now try to do:

....
touch a
....

which fails with:

....
touch: a: Read-only file system
....

We can also observe the read-onlyness with:

....
mount -t proc /proc
mount
....

which contains:

....
/dev/root on / type ext2 (ro,relatime,block_validity,barrier,user_xattr)
....

and so it is read-only, as shown by `ro`.

==== norandmaps

Disable userland address space randomization. Test it out by running <> twice:

....
./run --eval-after './linux/rand_check.out;./linux/poweroff.out'
./run --eval-after './linux/rand_check.out;./linux/poweroff.out'
....

If we remove it from our link:run[] script by hacking it up, the addresses shown by `linux/rand_check.out` vary across boots.

Equivalent to:

....
echo 0 > /proc/sys/kernel/randomize_va_space
....
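
Even without `linux/rand_check.out`, a quick shell check is possible: each `grep` below runs as a fresh process reading its own maps, so with randomization enabled the stack addresses differ between the two invocations, and with `norandmaps` they should be identical:

....
grep stack /proc/self/maps
grep stack /proc/self/maps
....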

=== printk

`printk` is the simplest and most widely used way of getting information out of the kernel, so you should familiarize yourself with its basic configuration.

We use `printk` a lot in our kernel modules, and its output shows on the terminal by default, along with stdout and what you type.

Hide all `printk` messages:

....
dmesg -n 1
....

or equivalently:

....
echo 1 > /proc/sys/kernel/printk
....
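
The current settings can be inspected with:

....
cat /proc/sys/kernel/printk
....

The four values are: the current console log level, the default level for messages printed without an explicit level, the minimum console log level that can be set, and the boot-time default console log level.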

See also: https://superuser.com/questions/351387/how-to-stop-kernel-messages-from-flooding-my-console

Do it with a <