My team here at GoDaddy builds a lot of software in Rust that we need to operate on a wide range of machine architectures and operating systems:
- x86-64 for Linux (GNU libc), MacOS and Windows (ideally with MinGW)
- aarch64/arm64 for Linux (GNU libc) and MacOS
For a long time this meant we'd spin up a whole bunch of CI machines, one for each platform, and compile the tools natively on each. This wasn't the best option available to us, though. Since we manage our own runner pools (using GitHub Actions), we need to maintain sets of these machines, and for MacOS and Windows we rely on the public runner pool infrastructure. Those public runners are both more costly and unable to access internal company systems (including some internal-only software libraries). We needed to look at other options, and we thought, "Rust is supposed to be pretty good at handling cross-compiling, so why don't we try using it for that purpose?"
Unveiling the complexity
Cross-compiling Rust software surfaces challenges at nearly every step, from toolchain dependencies to platform-specific libraries, and each one demands its own solution.
We already cross-compile some of our work, as we write HTTP proxy filters in Rust and compile them to WASM based on the proxy-wasm spec. My naïve first attempt was installing the Rust toolchain for a given platform triple (why is it called a triple when there are more than three items in it sometimes?) and giving it a shot. (I'm running this all on an x86_64 Debian machine, so package names are specific to that platform).
Here's what I did:
~/top-secret-work-project# rustup target add x86_64-pc-windows-gnu
~/top-secret-work-project# cargo build --target x86_64-pc-windows-gnu
error: failed to run custom build command for `ring v0.17.5`
Caused by:
process didn't exit successfully: `/root/top-secret-work-project/target/debug/build/ring-9e2d74aa803932bf/build-script-build` (exit status: 1)
--- stdout
# output elided...
running: "x86_64-w64-mingw32-gcc" "-O0" "-ffunction-sections" "-fdata-sections" "-gdwarf-2" "-fno-omit-frame-pointer" "-m64" "-I" "include" "-I" "/root/top-secret-work-project/target/x86_64-pc-windows-gnu/debug/build/ring-8250d53ba97b24ed/out" "-Wall" "-Wextra" "-fvisibility=hidden" "-std=c1x" "-pedantic" "-Wall" "-Wextra" "-Wbad-function-cast" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wenum-compare" "-Wfloat-equal" "-Wformat=2" "-Winline" "-Winvalid-pch" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wnested-externs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wstrict-prototypes" "-Wundef" "-Wuninitialized" "-Wwrite-strings" "-g3" "-DNDEBUG" "-o" "/home/todd/top-secret-work-project/target/x86_64-pc-windows-gnu/debug/build/ring-8250d53ba97b24ed/out/crypto/curve25519/curve25519.o" "-c" "crypto/curve25519/curve25519.c"
--- stderr
error occurred: Failed to find tool. Is `x86_64-w64-mingw32-gcc` installed?
Oh no! What's this "Is `x86_64-w64-mingw32-gcc` installed?" Wait, why are we calling `gcc`? I thought this was Rust?!
Leveraging the power of clang
Well, it is Rust, but it looks like we're actually compiling a crypto library written in C and then accessing it from Rust. Therefore our Rust toolchain needs to be able to invoke a functional C compiler for our target platform. Well, `clang` is supposed to be cross-platform out of the box, right? Let's give that a shot and tell `cargo` that we want to use `clang` as our C compiler, using the `CC` environment variable:
~/top-secret-work-project# CC=clang cargo build --target x86_64-pc-windows-gnu
# bunch of log lines omitted
cargo:warning=In file included from crypto/curve25519/curve25519.c:22:
cargo:warning=In file included from include/ring-core/mem.h:60:
cargo:warning=In file included from include/ring-core/base.h:64:
cargo:warning=In file included from /usr/lib/llvm-15/lib/clang/15.0.7/include/stdint.h:52:
cargo:warning=/usr/include/stdint.h:26:10: fatal error: 'bits/libc-header-start.h' file not found
cargo:warning=#include <bits/libc-header-start.h>
cargo:warning= ^~~~~~~~~~~~~~~~~~~~~~~~~~
cargo:warning=1 error generated.
OK! `clang` certainly does invoke and there are no more complaints about a missing `gcc` compiler; however, we are missing some standard libraries. (This is a common theme if you're not compiling pure-Rust software.) We could try to figure out how to install only the support libraries for our target system, but there are packages that supply a full cross-platform toolchain instead of having to install individual libraries. We can just install the `mingw-w64` package from Debian (which, you'll note, lists `gcc-mingw-w64` as a dependency -- the very compiler we got our initial error about). This gives us a complete `gcc` toolchain (linker, archiver, compiler, etc.) for building software on our current architecture/platform for the target platform:
~/top-secret-work-project# apt install mingw-w64
~/top-secret-work-project# cargo build --target x86_64-pc-windows-gnu
#[lots of compiling going on]
~/top-secret-work-project# file target/x86_64-pc-windows-gnu/debug/top-secret-work-project.exe
target/x86_64-pc-windows-gnu/debug/top-secret-work-project.exe: PE32+ executable (console) x86-64, for MS Windows, 21 sections
`file` is a fantastic tool that inspects a given file and tells you about it. With it, we can validate that our output is what we expect it to be: `PE32+` (the file type for Windows executables).
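If you'd rather not rely on cargo's autodetection, the MinGW toolchain can also be pinned per-target in the project's `.cargo/config.toml`. A minimal sketch (on most Linux setups, rustc already defaults to this linker for the `x86_64-pc-windows-gnu` target, so this is mainly about being explicit):

```toml
# .cargo/config.toml -- pin the MinGW cross toolchain for Windows builds.
# rustc usually defaults to this linker for this target; pinning it makes
# the build reproducible across machines with different setups.
[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"
```

Build scripts that compile C (like `ring`'s) still pick their compiler from the `CC` environment variable rather than this file.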
Cross-Compiling for aarch64
OMG, that one was really easy (after we installed the right toolchain). Let's try something a little closer to home and see if we can build for `aarch64` Linux. If we try that old `clang` trick again, we'll see we're missing support libraries for that target as well. Unfortunately, the cross-compiler packages aren't all named similarly, or this would be easier, but we can search the bookworm packages for an aarch64 `gcc`. You'll find there is a `gcc-aarch64-linux-gnu` package. Let's install that and see what we get.
~/top-secret-work-project# apt install gcc-aarch64-linux-gnu
~/top-secret-work-project# cargo build --target aarch64-unknown-linux-gnu
#[again there is a lot of compiling]
/usr/bin/ld: ~/top-secret-work-project/target/aarch64-unknown-linux-gnu/debug/deps/top-secret-work-project-89e314a14d73e562.105y1p0cy3ffj42o.rcgu.o: error adding symbols: file in wrong format
collect2: error: ld returned 1 exit status
Sadly this doesn't work! It looks like our linker, `ld`, is not the proper linker for this platform. Why Cargo can figure out to tell `rustc` to use the proper C compiler for the architecture, but not the proper linker, is beyond me. However, we can tell Cargo that `rustc` should use a different linker by setting the environment variable `RUSTFLAGS="-Clinker=[path to linker]"`. When we installed the `aarch64` cross-compile toolchain, we also got a proper linker for that platform (`aarch64-linux-gnu-ld`), so we can give this a try without installing anything else:
~/top-secret-work-project# RUSTFLAGS="-Clinker=aarch64-linux-gnu-ld" cargo build --target aarch64-unknown-linux-gnu
#[compiler nonsense]
= note: aarch64-linux-gnu-ld: cannot find -lgcc_s: No such file or directory
Um. That's not what we were hoping to see here. It seems like we're still missing some specific libraries somehow?
The problem is that the linker is trying to find `libgcc_s.so` and can't, because it's not installed in the normal system library search path (which is set up for compiling x86_64 software). Again, Cargo is able to figure out the compiler but nothing else, which is annoying, but we can solve this with yet another flag passed in via the `RUSTFLAGS` environment variable: `-L [path to directory]`. As with the linker, the problem isn't that we don't have the files; with the `gcc-aarch64-linux-gnu` package on Debian, they're located in `/usr/lib/gcc-cross/aarch64-linux-gnu/12/`.
Let's try this again. (There are other ways to ensure that these files are in the proper path, but this is the easiest):
~/top-secret-work-project# RUSTFLAGS="-Clinker=aarch64-linux-gnu-ld -L /usr/lib/gcc-cross/aarch64-linux-gnu/12/" cargo build --target aarch64-unknown-linux-gnu
#[again a lot of messages]
~/top-secret-work-project# file target/aarch64-unknown-linux-gnu/debug/top-secret-work-project
target/aarch64-unknown-linux-gnu/debug/top-secret-work-project: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, with debug_info, not stripped
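Rather than exporting `RUSTFLAGS` on every invocation, these settings can be persisted in the project's `.cargo/config.toml`. A sketch, assuming Debian's `gcc-aarch64-linux-gnu` package and the library path shown above (the gcc version directory, `12`, will vary with your distribution release):

```toml
# .cargo/config.toml -- per-target linker and library search path
# (paths assume Debian bookworm's gcc-aarch64-linux-gnu package)
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-ld"
rustflags = ["-L", "/usr/lib/gcc-cross/aarch64-linux-gnu/12/"]
```

With this in place, a plain `cargo build --target aarch64-unknown-linux-gnu` picks up the same flags automatically.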
Would you look at that! We've got three of our target platforms so far; the last one must be really easy, right?
Well, sadly, no. This is where things can get really complex. We want to compile for MacOS, but there's no MacOS toolchain for Debian (or any other Linux distro as far as I can tell). Thankfully this is where the community comes in with osxcross. This is a set of tools for extracting and building a valid toolchain for cross-compiling for MacOS from Linux and BSD!
Cross-Compiling for MacOS
You'll need an Apple account for this, and you'll need to ensure you're using the SDK under the terms of the license Apple provides for it. (I am not a lawyer, but I'm pretty sure that if you're building software designed to run on their computers with their SDK, that's kind of the point.)
Follow the instructions on that project to build your toolchain and you'll end up with an SDK full of files, including the necessary compilers, libraries, etc. -- just like in one of the Debian cross-compiling toolchain packages, only located in one directory.
Given we've already been over how to tell `rustc` and `cargo` to substitute different commands, we won't dive too deep here. You'll note we are using `LD_LIBRARY_PATH` instead of `-L` to provide library information. `LD_LIBRARY_PATH` works a little differently than `-L`, since it changes the underlying library search path used when the C compiler and `ld` are run. This is necessary due to the differences in how MacOS (with its BSD heritage) and Linux operate.
We're also going to change our `PATH` here, since we have to provide the path to the `clang` executable twice, and this allows us to just write out the program name instead.
~/top-secret-work-project# PATH=[path to osx sdk]/bin:$PATH LD_LIBRARY_PATH=[path to osx sdk]/lib:$LD_LIBRARY_PATH CC=x86_64-apple-darwin22.4-clang RUSTFLAGS="-Clinker=x86_64-apple-darwin22.4-clang -Clink-arg=-undefined -Clink-arg=dynamic_lookup" cargo build --target x86_64-apple-darwin
#[again with the compiling]
~/top-secret-work-project# file target/x86_64-apple-darwin/debug/top-secret-work-project
target/x86_64-apple-darwin/debug/top-secret-work-project: Mach-O 64-bit x86_64 executable, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|PIE|HAS_TLV_DESCRIPTORS>
You might notice we passed in `clang` as both our compiler and our linker, because that's just how this toolchain works. I don't make the rules, but at least it's pretty easy this way. Finally, we also passed in some specific `link-arg` flags -- these get passed through to the linker as additional flags (they're additive, so they don't replace the set already provided).
A nice part about the MacOS SDK is that it has both aarch64 AND x86_64 binaries in it, so we can just change the name of the C compiler and linker to make an `aarch64` build:
~/top-secret-work-project# PATH=/root/macos-13.4/bin:$PATH CC=aarch64-apple-darwin22.4-clang LD_LIBRARY_PATH=/root/macos-13.4/lib:$LD_LIBRARY_PATH RUSTFLAGS="-Clinker=aarch64-apple-darwin22.4-clang -Clink-arg=-undefined -Clink-arg=dynamic_lookup" cargo build --target aarch64-apple-darwin
#[come on compile!]
~/top-secret-work-project# file target/aarch64-apple-darwin/debug/top-secret-work-project
target/aarch64-apple-darwin/debug/top-secret-work-project: Mach-O 64-bit arm64 executable, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|PIE|HAS_TLV_DESCRIPTORS>
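As with the Linux targets, the linker settings can live in `.cargo/config.toml` instead of the environment. A sketch, assuming the osxcross compilers are on your `PATH` (the `darwin22.4` suffix comes from the SDK version used above and will differ for other SDK builds):

```toml
# .cargo/config.toml -- osxcross clang as linker for both Apple targets
# (compiler names depend on the SDK version you built with osxcross)
[target.x86_64-apple-darwin]
linker = "x86_64-apple-darwin22.4-clang"
rustflags = ["-Clink-arg=-undefined", "-Clink-arg=dynamic_lookup"]

[target.aarch64-apple-darwin]
linker = "aarch64-apple-darwin22.4-clang"
rustflags = ["-Clink-arg=-undefined", "-Clink-arg=dynamic_lookup"]
```

`CC` still needs to come from the environment for build scripts that compile C; the `cc` crate also accepts target-scoped variables like `CC_x86_64_apple_darwin` if you want to set all targets at once.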
And there we go! In our `target/` directory we've now got `aarch64-apple-darwin`, `aarch64-unknown-linux-gnu`, `debug`, `x86_64-apple-darwin`, and `x86_64-pc-windows-gnu` (where `debug` is our native target).
If you wanted to compile on `aarch64` Linux instead, you'd need the `gcc-x86-64-linux-gnu` package to go the other way, but the rest of the targets and instructions remain the same, as they would for other platforms you might want to target. You'll need to ensure that there is Rust support for your target.
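For completeness, a sketch of that reverse direction (an `aarch64` Debian host targeting x86_64 Linux), assuming the `gcc-x86-64-linux-gnu` package. Using the cross `gcc` driver itself as the linker, rather than the bare `ld`, sidesteps the `-lgcc_s` search-path problem from earlier, since `gcc` already knows where its own support libraries live:

```toml
# .cargo/config.toml on an aarch64 host -- cross-compile to x86_64 Linux
# (assumes Debian's gcc-x86-64-linux-gnu package is installed; using the
#  gcc driver as the linker avoids needing an explicit -L flag)
[target.x86_64-unknown-linux-gnu]
linker = "x86_64-linux-gnu-gcc"
```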
If you spend a minute searching the internet for this topic, you'll also come across tools like `cargo-zigbuild`, which provides a more complex, but likely more complete, solution for this problem.
We can also use some of these tricks to cross-compile other software: if your project includes C++ code, you can provide a different C++ compiler, analogously to the `CC` environment variable, by setting `CXX` to the C++ compiler for your target (which is also included in these packages). And to take it just a step further, we also build some Go libraries into C shared libraries, which necessitates using a C compiler in our Go build chain. (As you might recall from my previous article, Building a Fluent-Bit Plugin.) By using the same `CC` flag, we can do:
~/top-secret-work-fluent-bit-project# CGO_ENABLED=1 CC="aarch64-linux-gnu-gcc" GOARCH=arm64 GOOS=linux go build -buildmode c-shared
and achieve the same results.
Conclusion
We've now explored the benefits and challenges of cross-compiling as a way to run our Rust-built software across a wide range of machine architectures and operating systems.
As we discovered, cross-compiling can be a trial-and-error process, one filled with more than a few quirks and bumps along the road. Nonetheless, it's a viable and useful method. In the end, we successfully cross-compiled our Rust software for multiple platforms, and noted that more complete solutions such as cargo-zigbuild exist for those who prefer to embrace the rabbit hole and add more dependencies to their builds.