Building a custom firmware for the Banana Pi BPI-F3

I started building the firmware components for booting the BPI-F3 myself. While SpacemiT provides the source code of the individual components, the official firmware is built from forks of older versions, published without commit history. For my custom firmware I will use mainline versions where possible and otherwise create my own forks from mainline sources and merge the necessary changes into them.

Status

name     | build | test | comment
=========|=======|======|===============================================
bootinfo |  ok   |  ok  | generated files are identical to vendor files
fsbl     |   -   |   -  | no build yet
opensbi  |  ok   |  ok  | boot ok (OpenSBI messages shown in console)
u-boot   |  ok   | fail | doesn’t boot Linux and only detects 4GB RAM

Bootinfo

The first part of the boot process we have control over is the bootinfo binary. In the SpacemiT BSP (Bianbu Linux) the bootinfo is generated by a Python script during the U-Boot build.

With bpif3_gen_binary I extracted and slightly modified this script and the corresponding configuration files so they can be used independently of the u-boot source tree.

The content of the generated bootinfo is specified in a JSON file and can be modified. However, currently I do not see a need for that.

FSBL (U-Boot SPL)

The FSBL (First Stage Boot Loader) is generated by the same script as the bootinfo but requires u-boot-spl.bin at a path specified in the corresponding JSON configuration file.

While mainline u-boot already has support for the BPI-F3, it only generates the u-boot proper stage, not the u-boot SPL stage.

OpenSBI

Mainline OpenSBI does not have support for the SpacemiT K1 (yet). Luckily cyyself has already merged support for the SoC into a clean fork of OpenSBI.

With bpif3_opensbi I merged his work onto the latest version of mainline OpenSBI.
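
The merge itself is plain git. A minimal sketch, assuming a hypothetical fork URL and branch name - the actual ones are in the bpif3_opensbi repo:

# start from mainline OpenSBI
git clone https://github.com/riscv-software-src/opensbi.git
cd opensbi

# add the fork containing the K1 support (URL and branch name assumed)
git remote add cyyself https://github.com/cyyself/opensbi.git
git fetch cyyself
git merge cyyself/<k1-support-branch>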

U-Boot

Mainline u-boot has support for the BPI-F3. It lacks support for the u-boot SPL stage but builds the u-boot proper main stage.
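
Building the proper stage is the usual U-Boot cross-compile procedure. A sketch, assuming a riscv64 cross toolchain; the defconfig name is an assumption and should be checked against the u-boot tree:

# defconfig name assumed - check configs/ in the u-boot source tree
make CROSS_COMPILE=riscv64-linux-gnu- bananapi-f3_defconfig
make CROSS_COMPILE=riscv64-linux-gnu- -j$(nproc)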

Getting started with the Banana Pi BPI-F3

I recently bought a Banana Pi BPI-F3. The BPI-F3 is a development board based on the SpacemiT K1, an 8-core RISC-V chip.

Here I will write up how I got it up and running. My plan is to build a custom Gentoo Linux image for it - but that’s a long way off. At first I will connect everything and use an existing image.

Requirements:

  • BPI-F3
  • 12V Power Supply
  • USB to serial adapter (3.3V TTL)
  • USB-C to USB-A cable for connecting the BPI-F3 to the host

1 - Serial console

1.1 Connecting a USB to serial adapter

Connect GND, RX and TX of the UART0 debug connector to a USB serial adapter (3.3V TTL). Once connected, a terminal program can be used with the following settings:

baud rate: 115200
data bits: 8
parity   : none
stop bits: 1
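
For example with picocom, assuming the adapter shows up as /dev/ttyUSB0 (8N1 is picocom’s default):

picocom -b 115200 /dev/ttyUSB0

# alternatively with screen
screen /dev/ttyUSB0 115200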

1.2 Output with empty eMMC and no SD card inserted

In the default configuration the BPI-F3 tries to boot from SD card first and falls back to booting from eMMC. However, on a new BPI-F3 the eMMC doesn’t contain any data, so without an SD card inserted the output looks as shown below:

␀sys: 0x0
try sd...
bm:3
ERROR:   CMD8
ERROR:   sd f! l:76
bm:0
ERROR:   emmc: invalid bootinfo magic code:0x0
ERROR:   invalid bootinfo image.
ERROR:   entering download mode.
ROM: usb download handler
usb2d_initialize : enter
Controller Run

So in the next step we need to prepare an SD card and/or write an image to the eMMC.

2 - Installing an image

I initially planned to start with Bianbu Linux installed on an SD card, but for some reason it did not boot. The image was written correctly to the SD card and I have not figured out the problem yet. A Gentoo Linux image, however, booted successfully - after sorting out one old SD card which had corrupted the image on my first attempt. I therefore went on and installed Bianbu Linux on eMMC.
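
For reference, writing such an image to an SD card boils down to a dd call. A sketch with example file and device names - double-check the device before writing:

unzip bianbu-*.img.zip   # archive and image names are examples
dd if=bianbu.img of=/dev/sdX bs=4M status=progress conv=fsync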

2.1 - Boot selection settings

For booting from eMMC the DIP switches must be set correctly. In the factory default settings all 4 switches are set to off. Switches 1 and 2 select the boot device, and the factory default already selects eMMC, which is what we want.

Here are some details about the DIP switch functions (see the BPI-F3 schematic from the BananaPi Docs):

# BPI-F3 SW1 functions

-------------
|   off   on |
| 1 -o------ | - boot select 1 (QSPI_DATA0)
| 2 -o------ | - boot select 2 (QSPI_DATA1)
| 3 -o------ | - not connected
| 4 -o------ | - not connected
-------------
     ^---------- factory defaults (all 4 switches set to off)

Boot select:
     1  |  2  | function
    ====|=====|====================
    off | off | TF card -> eMMC (factory default)
    off | on  | TF card -> SPI NAND
    on  | off | TF card -> SPI NOR
    on  | on  | TF card only

2.2 - Enter download mode

To enter download mode on the BPI-F3, hold down the FDL button and then reset the device. Under some circumstances the device also enters download mode automatically; this is the case, for example, after a failed boot attempt with an empty eMMC.

Once the device is in download mode it shows the following messages on the serial debug console:

ROM: usb download handler
usb2d_initialize : enter
Controller Run

After connecting the PD12V USB port of the BPI-F3 to the host, a fastboot devices command should show the device as below:

$ fastboot devices
dfu-device	 DFU download

Now we can continue to install the image with fastboot.

2.3 - Enter U-Boot fastboot mode

At this point only a simple download mode provided by the boot ROM is running. Now we need to upload the FSBL (First Stage Boot Loader), which in the case of the BPI-F3 is the U-Boot SPL stage (SPL = Secondary Program Loader - secondary after the code from the boot ROM). After that we can also upload the main U-Boot binary (a.k.a. U-Boot proper), which then provides the fastboot mode we use for flashing the firmware.

The Boot Development Guide in the Bianbu Linux Docs describes this and provides additional information about the boot process of the SpacemiT K1.

To enter the U-Boot fastboot mode, run the following commands:

fastboot stage factory/FSBL.bin
fastboot continue

# wait some time to ensure u-boot is ready
sleep 1

fastboot stage u-boot.itb
fastboot continue

The required files can be found in the zip files available for download on the SpacemiT server. Downloads ending in img.zip contain an image to be written to an SD card, whereas downloads ending in just zip contain the separate files we can use to upload the firmware via fastboot - that’s what we need.

-> Here is the file for Bianbu Linux v2.0.4 (minimal variant for K1)

2.4 - Flash eMMC

Once U-Boot’s fastboot mode is running we can create the required partitions and write the firmware to the eMMC. All required files are included in the zip file we already downloaded.

To create the partitions and write the firmware, run the following commands:

fastboot flash gpt partition_universal.json
fastboot flash bootinfo factory/bootinfo_emmc.bin
fastboot flash fsbl factory/FSBL.bin
fastboot flash env env.bin
fastboot flash opensbi fw_dynamic.itb
fastboot flash uboot u-boot.itb
fastboot flash bootfs bootfs.ext4
fastboot flash rootfs rootfs.ext4

Now a reset should result in the BPI-F3 booting up the installed Linux. The boot process can be watched on the serial debug console.

3 - What to do next?

  • Play around with the device
  • Understand the boot process etc.
  • Investigate why the SD card images of Bianbu Linux did not boot
  • Custom builds of OpenSBI, U-Boot and Linux
  • and maybe more …

DANE-SMTP enabled

After setting up my mail server I had already set up SPF, DKIM, DMARC and MTA-STS, but was not yet sure how to deal with DANE.

DANE uses a TLSA entry on the DNS server to publish a service’s certificate or key. This means a client can verify that it is talking to the correct server without relying on any kind of CA. All this is based on DNSSEC, which ensures the authenticity and integrity of the DNS entries.

While this concept is pretty neat in general, it requires the DNS entries to be updated whenever a certificate or key changes. Additionally, simply updating an entry will not be enough, since other DNS servers might still have the old key cached. Therefore a more sophisticated rollover mechanism would be required. Due to the short lifetime of Let’s Encrypt certificates this is even more relevant.

Luckily Certbot provides the option --reuse-key, which avoids the need to update the DNS entries by reusing the existing key. A TLSA entry can therefore be generated from the current public key and will not require updating, since the key does not change.
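
A minimal sketch of how to enable this, with an example domain; the renewal file snippet reflects Certbot’s configuration format as I understand it:

# request/renew the certificate with key reuse enabled
certbot certonly --reuse-key -d example.org

# or enable it for an existing certificate lineage in
# /etc/letsencrypt/renewal/example.org.conf:
#   [renewalparams]
#   reuse_key = True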

For generating the according TLSA entry, Shumon Huque has a very handy TLSA Record Generator on his website. The entry in this case must use usage 3 (DANE-EE) with selector 1 (SPKI). I used a SHA-256 hash (matching type 1) and got the following result:

_25._tcp.vs.senvang.org. IN TLSA 3 1 1 (
                  5634d5e1ce5f1e4a8ab25cd8335c97ab76e1215e09c157568e5c9a3dc39a
                  a491
)

Alternatively you can generate the hash with openssl (openssl ec assumes an EC key; openssl pkey -pubin -outform der works for any key type):

openssl x509 -in cert.pem -pubkey -noout | openssl ec -pubin -outform der | sha256sum

Obviously this does not remove the need for a clean rollover when a key change does happen, but it massively reduces the frequency and therefore the risk, and it gives me time to think about how to automate this.

Theme update

I use the static site generator Hugo to create this website and chose the Terminal theme originally created by the GitHub user panr. I use my own branch, which replaces the font FiraCode with FiraMono to get rid of the ligatures.

Today I found out that panr continued development on his theme after he had actually stopped a while ago. A happy surprise! So I merged his latest version into my branch and updated the website. He introduced some major changes: the theme now relies on his new project Terminal.css, which separates the style from the theme so people can use it independently.

This also makes it easy to create custom color schemes (instead of the fixed set of colors available before). I used this to slightly adapt the colors to my company logo.

Besides this there should be no visible changes here. Thanks, panr!

A quick update

It’s been a long time since I wrote something here, but things have happened in the background. Here is a quick update:

  • I moved the server to a more powerful one (VPS 1000) - still from Netcup. The main reason for this upgrade was my decision to switch to Gentoo Linux on the server as well - my desktop computer has been running flawlessly with it for years now - and I wanted to make this switch before I started setting up a mail server.

  • I got the following services running on Gentoo now:

    1. Certbot to get certificates from Let’s Encrypt
    2. Nginx as web server / proxy
    3. Radicale as CalDAV / CardDAV server
    4. Git via ssh access (I keep it simple and do not run Gitea anymore)
    5. Postfix / Dovecot as mail server
    6. Wireguard as VPN server
  • The mail server is set up with SPF, DKIM, DMARC and MTA-STS. Regarding DANE I am still investigating whether I can/want to set this up with the frequently changing Let’s Encrypt certificates.

So what the heck is all that? Here in short:

  • SPF: Specifies the mail servers allowed to send mail for the domain by publishing an entry in the DNS
  • DKIM: Sign outgoing mail and publish the public key in the DNS, so the recipient can verify the signature
  • DMARC: Publish a DMARC DNS entry with instructions on how to handle mail which doesn’t pass SPF/DKIM checks
  • MTA-STS: A combination of DNS and HTTPS to instruct SMTP servers to encrypt communication and verify the used certificate
  • DANE: A DNS entry containing the server certificate (usually just a hash of it), allowing the communication partner to compare the server’s certificate against the one published in DNS
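
All of these entries live in DNS (MTA-STS additionally serves its policy via HTTPS), so they can be inspected with dig. A sketch with an example domain and DKIM selector:

dig +short example.org TXT                    # SPF
dig +short mail._domainkey.example.org TXT    # DKIM (selector is an example)
dig +short _dmarc.example.org TXT             # DMARC
dig +short _mta-sts.example.org TXT           # MTA-STS policy id
dig +short _25._tcp.mail.example.org TLSA     # DANE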

That’s what happened here over the last months. More to come …

Reverse engineering a BLE body fat scale protocol

I recently bought the ‘smart’ body fat scale Crénot Gofit S2. Like many of those modern ‘smart’ devices it uses Bluetooth Low Energy (BLE) and an app on the phone to connect to it. Due to privacy concerns I do not want to use this cloud-based app and share my fitness / health data.

Instead I found openScale, an open source app that would give me the same functionality while storing all data locally on my phone. Sadly it did not support my scale, so I started investigating whether I could add support for it.

Initially I read the How to reverse engineer a Bluetooth 4.x scale notes on the openScale GitHub page and also created a Bluetooth HCI snoop log as described there, but then ended up following a different approach and wrote a small client in Python using Bleak.

BLE devices use the Generic Attribute Profile (GATT) and provide services with different characteristics. The BLE scanner app mentioned in the notes came in handy for gathering information about the services and characteristics the scale provides.
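
The same information can also be gathered from a Linux host with bluetoothctl. A sketch of an interactive session; the MAC address is a placeholder:

bluetoothctl
# the following commands are entered at the bluetoothctl prompt
scan on                  # find the scale and note its MAC address
connect AA:BB:CC:DD:EE:FF
menu gatt
list-attributes          # lists the provided services and characteristics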

With this information I was quickly able to reliably get the weight value from the scale, but have not yet figured out how other values - like body fat percentage, for example - are transferred.

My client can be found in my scale-communication repo on GitHub, and now that the basic communication protocol is known it should also be possible to implement this in openScale. I have not given up yet on getting the missing values from my scale either - but that’s for another time.

Radicale installation on Devuan Daedalus 5.0

For years I have been running my own CalDAV / CardDAV server to synchronize my calendars and address books. I use Radicale for this. On my Uberspace it was installed via Python’s pip, but now it was time to move it to my own server too, and I chose to install the package directly from the Devuan repository via apt-get install radicale.

apt-get only installs the package; some additional configuration is needed to set everything up.

At first I modified the configuration file /etc/radicale/config and set type = htpasswd and htpasswd_encryption = bcrypt (both in the [auth] section).
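
The relevant part of the configuration then looks like this (a sketch; htpasswd_filename must point at the file created in the next step):

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt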

Then I created a new htpasswd file with my user:
htpasswd -c -B /etc/radicale/users <user>

Finally I started the service and enabled it to be started automatically:
/etc/init.d/radicale start
update-rc.d radicale enable

After this the Radicale server is bound to localhost, listening on port 5232. To make it accessible from the outside, the following configuration must be added to the Nginx configuration (and Nginx restarted once):

location /radicale/ { # The trailing / is important!
      proxy_pass        http://localhost:5232/; # The / is important!
      proxy_set_header  X-Script-Name /radicale;
      proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header  Host $http_host;
      proxy_pass_header Authorization;
}

Now I only had to copy contact and calendar entries from my Uberspace instance. Done!

Linux Containers (LXC) on Devuan Daedalus 5.0

I started investigating how to run Linux Containers on Devuan Linux 5.0 (Daedalus), as I run it on my server. For this I first set up a local VM which replicates my virtual server. I use qemu for this and created a script called vm-netcup, which is part of my dotfiles repository and sets the qemu parameters accordingly.

Installation via apt-get install lxc works as expected. So far so good - but things would not go that smoothly for much longer.

cgroups

LXC relies on cgroups, a Linux kernel feature for isolation, limitation and accounting of the resource usage of processes. On distributions using systemd this is set up by systemd, but on Devuan I chose not to use systemd.

You guessed it: lxc-checkconfig and ls /sys/fs/cgroup show that cgroups are not set up by default (at least not with my minimal installation). LXC can work with cgroups v1, but cgroups v2 provides a cleaner, unified hierarchy and is therefore my preferred way.

To mount the cgroup v2 filesystem at boot time, I simply created an additional entry in the /etc/fstab file:
none /sys/fs/cgroup cgroup2 defaults 0 0
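
To use it without rebooting, the filesystem can also be mounted manually and the result verified (a sketch):

mount -t cgroup2 none /sys/fs/cgroup

ls /sys/fs/cgroup    # should now show the cgroup2 controller files
lxc-checkconfig      # the cgroup checks should now pass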

In case you want to use cgroups v1 you can install the cgroupfs-mount package which installs a service to perform the required mounts at boot time.

That’s all that is required to set up cgroups for LXC.

unprivileged vs. privileged containers

In short: we want unprivileged containers whenever possible. Those map user ids inside the container to a different range on the host and are therefore the safest option. For example, user id 0 (root) in an unprivileged container would be mapped to user id 100000 on the host. User id 0 (root) in a privileged container, on the other hand, would also be root on the host.

/etc/subuid and /etc/subgid contain the ranges for mapping uids and gids on the host, but those values also need to be reflected in the LXC configuration as described below.
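
The entries have the format <user>:<first id>:<count>; for the mapping used below they would look like this:

# /etc/subuid and /etc/subgid
<user>:100000:65536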

configuration

For configuration LXC distinguishes between system and container configuration. See the corresponding man pages lxc.container.conf and lxc.system.conf for further information.

By default LXC on my system used ~/.local/share/lxc/ for container storage. This resulted in LXC complaining about the user’s home directory not having the x permission for the container’s root user, and starting the container failed.

Therefore I decided to move the container storage to /var/lib/lxc/<user>. This directory needs to be created as root, and ownership as well as permissions need to be set as follows:
chown <user>:<group> /var/lib/lxc/<user>
chmod 711 /var/lib/lxc/<user>

Once this is done we can continue as a regular user. We create a file ~/.config/lxc/lxc.conf and configure the container storage path in it:
lxc.lxcpath = /var/lib/lxc/<user>

/etc/lxc/default.conf contains the system-wide container default configuration. We copy this file to ~/.config/lxc/default.conf and modify it to create our user-specific defaults.

Be aware that this file only contains the default configuration used during creation of a new container. Once created, every container has its own configuration file in its folder within the container storage path.

We now add the following lines to configure the uid and gid mapping. The values used here must match the values defined in /etc/subuid and /etc/subgid for the user:
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536

Additionally we change the AppArmor profile as follows:
lxc.apparmor.profile = unconfined

creating a new container

To create a new container we use the lxc-create command. lxc-create -t download -n <container-name> is a good start and provides an interactive way to create a new container. In my case I want to install the 3.18 release of the Alpine distribution for the amd64 architecture.

All of this can also be packed into the command directly:
lxc-create -t download -n <container-name> -- --dist alpine --release 3.18 --arch amd64

network

With all this set up the container still fails to start because it cannot set up its network. By default unprivileged containers cannot use any networking. We need to create an additional configuration file /etc/lxc/lxc-usernet as root. This file must have an entry allowing the user to add veth devices - up to 16 in my case - to the lxcbr0 bridge:
<user> veth lxcbr0 16

using the container

To start an already existing container the command lxc-start -n <container-name> is used. With everything described above in place, the container starts without issues. The previously appearing tmpfs: Bad value for 'uid' error messages were related to the Devuan container image used; they disappeared after I switched to the Alpine Linux image.

Once the container is running we can start a process in the container with lxc-attach. For example lxc-attach -n <container-name> bash will provide shell access to the container.

For stopping the container again we use lxc-stop -n <container-name>.

In case the container isn’t running we can directly run a command in it with lxc-execute. For example lxc-execute -n <container-name> bash will provide shell access as before.

conclusion

I am not done with this topic yet, but for now this is a good foundation and a playground for further investigating whether I want to run containers on the server. I might update this post at some point if there are any points worth mentioning.

The network setup in particular was barely touched here and could be a topic for a separate post at some point. For now that’s it …

I ordered a new virtual server

After running my own server for some years in the past, I moved on to using an Uberspace. Uberspace as a hoster is great and offers a lot of flexibility, but it has its limitations (for example it is not possible to run a VPN server). I am planning to run a VPN server and also like the admin work overall. So I ordered a small virtual server (VPS 200 G10s) from Netcup again.

Why did I choose Netcup?

The answer to that is easy: it’s cheap and I have already had good experiences with it. For 3.25 Euro/month I get a VPS with 2 vCPUs, 2GB RAM and 40GB SSD. For the virtualization they use KVM, and via the web interface I can mount any ISO file and install whatever OS I like. The drawback of their cheap virtual servers is the quite low minimum availability of just 99.6%, but for my use case that’s not an issue.

Which OS did I install?

After initially even considering Gentoo Linux, I quickly dropped that idea due to the limitations of the virtual hardware. I then - after some research - did a kind of test installation of Alpine Linux. It was the first time I ever used Alpine and I must say that I am pretty impressed with it - especially due to its low footprint. Anyway, I decided to stick with something I know better and trust a bit more in the long term.

I installed Devuan Linux - Devuan Daedalus 5.0. For those who are not aware, Devuan is a fork of Debian Linux which does not use systemd. Therefore I can count on the great selection of software available for Debian while still using the good old sysvinit.

As always I did a minimal installation and added what is required for my needs from there. My base installation with Nginx, Certbot and unattended upgrades uses about 60MB of RAM. A nice small footprint as well, even if it’s not as small as Alpine Linux (but that’s expected).

Further plans?

With Nginx and Certbot up and running I already moved the website to the new server. Additionally I want to install the following components:

  • a CalDAV / CardDAV server (Radicale)
  • a mail server
  • a VPN server

Regarding all this I am not in a hurry. For now I am using the Uberspace and my virtual server in parallel, and before installing any of those components I want to do some research about Linux Containers. So far I do not have experience with containers of any kind, and I am not a huge fan of Docker and its approach of isolating single applications. Linux Containers, on the other hand, are available on any Linux system, provide OS-level virtualization and do not rely on infrastructure controlled by a single company. So my idea is to isolate some parts of my installation into Linux Containers (probably using Devuan images as well). This way I should gain some flexibility when updating the main server OS and be able to move and update some parts independently.

In any case containers are an interesting topic and digging deeper into them will be worth it …