compile – Hackaday
https://hackaday.com — Fresh hacks every day

I Installed Gentoo So You Don’t Havtoo
https://hackaday.com/2024/11/04/i-installed-gentoo-so-you-dont-havtoo/
Mon, 04 Nov 2024 15:00:22 +0000

A popular expression in the Linux forums nowadays is noting that someone “uses Arch btw”, signifying that they have the technical chops to install and use Arch Linux, a distribution designed to be cutting edge but that also has a reputation of being for advanced users only. Whether this meme was originally posted seriously or was started as a joke at the expense of some of the more socially unaware Linux users is up for debate. Either way, while it is true that Arch can be harder to install and configure than something like Debian or Fedora, thanks to excellent documentation and modern (but optional) install tools it’s no longer that much harder to run than either of these popular distributions.

For my money, the true mark of a Linux power user is the ability to install and configure Gentoo Linux and use it as a daily driver or as a way to breathe life into aging hardware. Gentoo requires much more configuration than any mainline distribution outside of things like Linux From Scratch, and has been my own technical white whale for nearly two decades now. I was finally able to harpoon this beast recently and hope that my story inspires some to try Gentoo while, at the same time, saving others the hassle.

A Long Process, in More Ways Than One

My first experience with Gentoo was in college at Clemson University in the late ’00s. The computing department there offered an official dual-boot image for any university-supported laptop at the time thanks to major effort from the Clemson Linux User Group, although the image contained the much-more-user-friendly Ubuntu alongside Windows. CLUG was largely responsible for helping me realize that I had options outside of Windows, and eventually I moved completely away from it and began using my own Linux-only installation. Being involved in a Linux community for the first time had me excited to learn about Linux beyond the confines of Ubuntu, though, and I quickly became the type of person featured in this relevant XKCD. So I fired up an old Pentium 4 Dell desktop that I had and attempted my first Gentoo installation.

For the uninitiated, the main thing that separates Gentoo from most other distributions is that it is source-based, meaning that users generally must compile the source code for all the software they want to use on their own machines, rather than installing pre-compiled binaries from a repository. So, for a Gentoo installation, everything from the bootloader to the kernel to the desktop to the browser needs to be compiled when it is installed. This can take an extraordinary amount of time, especially on underpowered machines, although the ability to customize compile options means that software can be optimized for a specific computer, letting users claim that time back whenever the software is actually used. At least, that’s the theory.
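To make that concrete, here’s roughly what installing a single package looks like under Portage, Gentoo’s package manager (the package atom is just an example, and timings will vary wildly by hardware):

```shell
# Unlike 'apt install', emerge fetches source code and compiles it locally,
# honoring the CFLAGS and USE flags set in /etc/portage/make.conf.
# Portage first shows the dependency tree and the USE flags each package
# will be built with, then asks for confirmation before compiling.
emerge --ask app-editors/vim
```

The compile itself can take minutes for a small editor or days for a full desktop environment, which is where those optimization gains are supposed to pay for themselves.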

It didn’t work out too well for me and my Dell, though, largely because Dell of the era would put bargain-basement, obscure hardware in its budget computers, which can make for a frustrating Linux experience even on the more user-friendly distributions due to a general lack of open-source drivers. I still hold a grudge against Dell for this practice in much the same way that I still refuse to use Nvidia graphics cards, but before I learned this lesson I spent weeks one summer in college with this Frankensteined computer, waiting days for kernels and desktop environments to compile only to find out that something critical was missing and had broken my installation. I did get to a working desktop environment at one point, but made a mistake along the way and decided, based on my Debian experience, that re-installing the operating system was the way to go rather than actually fixing the mistake. I never got back to a working desktop after that and eventually gave up.

This experience didn’t drive me away from Gentoo completely, though. It was always at the back of my mind during any new Linux install I performed, especially on underpowered hardware that could have benefited from Gentoo’s customization. I would try it again occasionally, only to give up for similar reasons, but I finally decided I had gained enough knowledge from my decades as a Debian user to give it a proper go. A lot has changed in the intervening years; in the days of yore an aspiring Gentoo user had to truly start from the ground up, even going as far as needing to compile a compiler. These days only Gentoo developers take these fundamental steps, providing end users with a “Stage 3” tarball which contains the core needed to install the rest of Gentoo.
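For reference, the modern starting point looks something like this sketch, loosely following the Gentoo Handbook (the mirror path and tarball name here are placeholders, not literal values):

```shell
# From any Linux live environment, after partitioning and mounting the
# target disk at /mnt/gentoo: download and unpack a stage 3 tarball.
cd /mnt/gentoo
wget https://distfiles.gentoo.org/releases/amd64/autobuilds/<build>/stage3-amd64-<variant>-<date>.tar.xz
# Preserve permissions and extended attributes while extracting
tar xpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
```

From there, the Handbook walks through chrooting into the new system and building everything else with Portage.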

Bringing Out The Best of Old Hardware

And I do have a piece of aging hardware that could potentially benefit from a Gentoo installation. My mid-2012 MacBook Pro (actually featured in this article) is still a fairly capable machine, especially since I only really need a computer these days for light Internet browsing and writing riveting Hackaday articles. Apple long ago dropped support for this machine in macOS, meaning that it’s no longer a good idea to run its native operating system. In my opinion, though, these older, pre-butterfly Macs are still excellent Linux machines aside from minor issues like finding the correct WiFi drivers. (It also can’t run libreboot, but it’s worth noting that some Macs even older than mine can.) With all of that in mind I got to work compiling my first Linux kernel in years, hoping to save my old MacBook from an e-waste pile.

There’s a lot expected of a new Gentoo user even with modern amenities like the stage 3 tarball (and even then, you have to pick a stage file from a list of around 50 options), and although the handbooks provided are fairly comprehensive, they can be confusing or misleading in places. (It’s certainly recommended to read the whole installation guide first and even perform a trial installation in a virtual machine before trying it on real hardware.) In addition to compiling most software from source (although some popular packages like Firefox, LibreOffice, and even the kernel itself are available as precompiled binaries now), Gentoo requires the user to configure what are called USE flags for each package, which specify that package’s compile options. For example, when compiling GIMP, users can choose which image formats they want their installation of GIMP to support. A global USE flag file is also maintained to build things like GNOME, Bluetooth, or even 32-bit support into every package, while package-specific USE flags are maintained in separate files. There’s a second layer of complexity here too: certain dependencies can be “masked,” or forbidden from being installed by default, so the user will also need to understand why certain things are masked and manually unmask them if the risk is deemed acceptable.
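As a sketch of how that configuration looks on disk (the specific flag names shown are illustrative, and the flags actually available differ per package):

```shell
# /etc/portage/make.conf -- global USE flags applied to every package;
# a leading '-' disables a feature.
USE="systemd bluetooth -gnome -kde"
```

Per-package flags live in files under /etc/portage/package.use/ (for example, a line like `media-gfx/gimp webp -postscript` to control GIMP’s format support), while masked packages are unmasked, at your own risk, with entries under /etc/portage/package.accept_keywords or /etc/portage/package.unmask.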

One thing that Gentoo has pioneered in recent years is the use of what it calls distribution kernels. These are kernel configurations with sane defaults, meaning that they’ll probably work for most users on most systems on the first try. From there, users can begin tweaking the kernel for their use case once they have a working installation, but they no longer have to do that legwork during the installation process. Of course, in true Gentoo fashion, you can still configure the kernel manually during the install if you choose to.
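The distribution-kernel route boils down to a single package install; roughly, assuming an otherwise-configured system:

```shell
# Build the distribution kernel locally with its sane default configuration...
emerge --ask sys-kernel/gentoo-kernel
# ...or skip hours of compiling and install the upstream-built binary instead
emerge --ask sys-kernel/gentoo-kernel-bin
```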

Aside from compiling a kernel, Gentoo also requires the user to make other fundamental choices about their installation during the install process that most other major distributions don’t. Perhaps the biggest one is that the user has to choose an init system, the backbone of the operating system’s startup and service management. Most distributions decide for you, with the larger ones like Debian, Fedora, and Arch going with systemd by default. Like anything in the Linux world, systemd is controversial for some, so there are alternatives, with OpenRC being the most widely accepted in the Gentoo world. I started out with OpenRC in my installations but found that a few pieces of software I use regularly don’t play well with it, so I started my build over and now use systemd. The user can also select between a number of different bootloaders, and I chose the tried-and-true GRUB, seeing no compelling reason to change at the moment.
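In practice, the init-system decision is largely made by selecting the matching system profile before building everything; a sketch, with illustrative profile numbers and output:

```shell
# List the available profiles, then pick the systemd (or default/OpenRC) variant.
eselect profile list
#   [1]  default/linux/amd64/23.0 (stable)
#   [2]  default/linux/amd64/23.0/systemd (stable)
eselect profile set 2
```

Switching profiles after the fact generally means rebuilding a large chunk of the system, which helps explain why starting a build over can be the simpler path.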

In addition, there’s no default desktop environment, so you’ll also need to choose between GNOME, KDE, XFCE, any other desktop environment, or among countless window managers. The choice to use X or Wayland is up to you as well. For what it’s worth, I can at least report that GNOME takes about three times as long to compile as the kernel itself does, so keep that in mind if you’re traveling this path after me.

It’s also possible you’ll need to install a number of drivers for hardware, some of which might be non-free and difficult to install in Gentoo while they might be included by default in distributions like Ubuntu. And, like everything else, they’ll need to be compiled and configured on your machine as well. For me specifically, Gentoo was missing the software to control the fans on my MacBook Pro, but this was pretty easy to install once I found it. There’s an additional headache here as well with the Broadcom Wi-Fi cards found in older Macs, which are notoriously difficult pieces of hardware to work with in the Linux world. I was eventually able to get Wi-Fi working on my MacBook Pro, but I also have an 11″ MacBook Air from the same era that has a marginally different wireless chipset that I still haven’t been able to get to work in Gentoo, giving me flashbacks to my experience with my old Dell circa 2007.

This level of granularity when building software, and the overall installation, is what gives Gentoo the possibility of highly optimized installations, as everything from individual packages down to the kernel itself can be configured for the user’s exact use case. It’s also a rolling-release model similar to Arch, so in general the newest versions of software will be available as soon as possible, while a Debian user might have to wait a year or two for the next stable release.

A Few Drawbacks

It’s not all upside, though. For those without a lot of Gentoo experience (myself included) it’s possible to spend a day and a half compiling a kernel or desktop environment only to find out a critical feature wasn’t built, and then spend another day and a half compiling it again with the correct USE flags. Or to use the wrong stage file on the first try, realize OpenRC won’t work as an init system for a specific use case, or have Grub inscrutably fail to find the installation. Don’t expect Gentoo to be faster out of the box than Debian or Fedora without a customization effort, either; for me, Gentoo was actually slower than Debian in my benchmarks without a few kernel and package re-compiles. With enough persistence and research, though, it’s possible to squeeze every bit of processing power out of a computer this way.
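The saving grace is that a missed USE flag doesn’t mean reinstalling; Portage can rebuild only what a flag change affects. A sketch of the usual invocation:

```shell
# Rebuild any installed package whose effective USE flags have changed,
# including dependencies pulled in by the new flags.
emerge --ask --changed-use --deep @world
```

Of course, if the affected set includes the kernel or a desktop environment, budget another day of compiling anyway.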

Personally, I’m not sure I’m willing to go through the amount of effort to migrate my workstations (and especially my servers) to Gentoo because of how much extra configuration is required for often marginal performance gains thanks to the power and performance capabilities of modern hardware. Debian Stable will likely remain my workhorse for the time being for those machines, and I wouldn’t recommend anyone install Gentoo who doesn’t want to get into the weeds with their OS. But as a Linux hobbyist there’s a lot to be said for using other distributions that are a little more difficult to use than Debian or even Arch, although I’d certainly recommend using a tool like Clonezilla to make backups of your installation from time to time so if you do make the same mistakes I made in college you can more easily restore your system. For me, though, I still plan to keep Gentoo on my MacBook Pro since it’s the machine that I tinker with the most in the same way that a classic car enthusiast wants to keep their vehicle on the road and running as well as it did when it was new. It also lets me end forum posts with a sardonic “I use Gentoo, btw” to flex on the Arch users, which might be the most important thing of all.

Turning the PS4 into a Useful Linux Machine
https://hackaday.com/2022/02/10/turning-the-ps4-into-a-useful-linux-machine/
Fri, 11 Feb 2022 00:00:32 +0000

When the PlayStation 3 first launched, one of its most lauded features was its ability to officially run full Linux distributions. This was of course famously and permanently borked by Sony with a software update after a few years, presumably since the console was priced too low to make a profit and Sony didn’t want to indirectly fund server farms made out of relatively inexpensive hardware. Of course, a decision like this to keep Linux off a computer system is only going to embolden Linux users to put it on those same systems, and in that same vein this project turns a more modern PlayStation 4 into a Kubernetes cluster with the help of the infamous OS.

The PlayStation 4’s hardware is a little dated by modern desktop standards, but it is still quite capable as a general-purpose computer provided you know the unofficial, unsupported methods of installing Psxitarch Linux on one. This is a distribution based on Arch and built specifically for the PS4, but to get it to run the Docker images that [Zhekun Hu] wanted to use, some tinkering with the kernel was needed. With some help from the Gentoo community a custom kernel was eventually compiled, and after spending some time in what [Zhekun Hu] describes as “Linux Kernel Options Hell” a working configuration was eventually found.

The current cluster is composed of two PS4s running this custom software and runs a number of services including Nginx, Calico, Prometheus, and Grafana. For those with unused PlayStation 4s laying around this might be an option to put them back to work, but it should also be a cautionary tale about the hassles of configuring a Linux kernel from scratch. It can still be done on almost any machine, though, as we saw recently using a 386 and a floppy disk.

Yo Dawg, We Heard You Like Retrocomputers
https://hackaday.com/2021/08/14/yo-dawg-we-heard-you-like-retrocomputers/
Sun, 15 Aug 2021 05:00:13 +0000

The idea of having software translation programs around to do things like emulate a Super Nintendo on your $3000 gaming computer or, more practically, run x86 software on a new M1 Mac seems pretty modern, since it is so prevalent in the computer world today. The idea is in fact much older, easily tracing back to the ’80s and the era of Commodore and Atari personal computers. Their hardware was actually not too dissimilar, and with a little bit of patience and know-how it’s possible to compile the Commodore 64 kernel on an Atari, with some limitations.

This project comes to us from [unbibium] and was inspired by a recent video he saw where the original Apple computer was emulated on a Commodore 64. He took it in a different direction for this build, though. The first step was to reformat the C64 code so it would compile on the Atari, which was largely accomplished with a Python script and some manual tweaking. From there he started working on making sure the ROMs would actually run. The memory setups of these two machines are remarkably similar, which made this slightly easier, but he needed a few workarounds for some speed bumps. Finally the cursor and HMIs were configured, and once a few other things were straightened out he had a working system running C64 software on an 8-bit Atari.

Unsurprisingly, there are a few things that aren’t working. There’s no IO besides the keyboard and mouse, and saving and loading programs is not yet possible. However, [unbibium] has made all of his code available on his GitHub page if anyone wants to expand on his work or improve upon this project in future builds. If you’re looking for a much easier point-of-entry for emulating Commodore software in the modern era, though, there is a project available to run a C64 from a Raspberry Pi.

Thanks to [Cprossu] for the tip!

Lightweight OS For Any Platform
https://hackaday.com/2021/03/22/lightweight-os-for-any-platform/
Mon, 22 Mar 2021 18:31:18 +0000

Linux has come a long way from its roots, where users had to compile the kernel and all of the other source code from scratch, often without any internet connection at all to help with documentation. It was the wild west of Linux, and while we can all rely on an easy-to-install Ubuntu distribution if we need it, there are still distributions out there that require some discovery of those old roots. Meet SkiffOS, a lightweight Linux distribution which compiles on almost any hardware but also opens up a whole world of opportunity in containerization.

The operating system is intended to be able to compile itself on any Linux-compatible board (with some input) and yet still be lightweight. It can run on Raspberry Pis, Nvidia Jetsons, and x86 machines to name a few, and focuses on hosting containerized applications independent of the hardware it is installed on. One of the goals of this OS is to separate the hardware support from the applications, while being able to support real-time tasks such as applications in robotics. It also makes upgrading the base OS easy without disrupting the programs running in the containers, and of course has all of the other benefits of containerization as well.

It does seem like containerization is the way of the future, and while it has obviously been put to great use in web hosting and other network applications, it’s interesting to see it expand into a real-time arena. Presumably an approach like this would have many other applications as well since it isn’t hardware-specific, and we’re excited to see the future developments as people adopt this type of operating system for their specific needs.

Thanks to [Christian] for the tip!

Compiling NodeMCU for the ESP32 With Support for Public-Private Key Encryption
https://hackaday.com/2019/01/03/compiling-nodemcu-for-the-esp32-with-support-for-public-private-key-encryption/
Thu, 03 Jan 2019 15:01:15 +0000

When I began programming microcontrollers in 2003, I had picked up the Atmel STK-500 and learned assembler for their ATtiny and ATmega lines. At the time I thought it was great – the emulator and development boards were good, and I could add a microcontroller permanently to a project for a dollar. Then the ESP8266 came out.

I was pretty blown away by its features and switched platforms for everything except timing-sensitive applications; it’s been my chip of choice for a few years. A short while ago, a friend gave me an ESP32, the much faster, dual core version of the ESP8266. As I rarely used much of the computing power on the ESP8266, none of the features looked like game changers, and it remained a ‘desk ornament’ for a while.

About seven weeks ago, support for the libSodium Elliptic Curve Cryptography library was added. Cryptography is not the strongest feature of IoT devices, and some of the methods I’ve used on the ESP8266 were less than ideal. Being able to more easily perform public-private key encryption would be enough for me to consider switching hardware for some projects.

However, my preferred automated build tool for NodeMCU wasn’t available on the ESP32 yet. Compiling the firmware was required – this turned out to be a surprisingly user-friendly experience, so I thought I’d share it with you. If I had known it would be so quick, this chip wouldn’t have sat on my desk unused quite so long! 

Config and Compile

The easiest way to begin is to have Linux running either as your primary operating system or in a virtual machine like VirtualBox, with Git and Make installed. Our workflow will be cloning the NodeMCU ESP32 development branch, configuring the required compile options, saving the config, compiling, and flashing.

First, we clone the ESP32 git repository with git clone --branch dev-esp32 --recurse-submodules https://github.com/nodemcu/nodemcu-firmware.git nodemcu-firmware-esp32. This will copy over the source code required to build the NodeMCU firmware and all modules. Then make menuconfig will launch a simple menu-driven interface where you select the compile options and included modules. To conserve space and memory, it’s not recommended to select more modules than you need for a particular project.

The main menu. Overall it’s well organized and exceeded expectations.

The menu contains familiar options such as the default serial port, baud rate, flash memory details and so on. Notably you can set the CPU speed here, configure external RAM, and define how WiFi and Bluetooth will function. While most options can be left at their default values, there are a lot of interesting features worth exploring in the menus. You’ll almost certainly want to enter ‘component config → NodeMCU Modules’ and enable the high level features required for your project:

This is the menu interface to select which modules you need to compile support for, for example the DHT11 temperature / humidity sensor.

If enabling the libSodium module there, you may also want to enter the mbedTLS and libSodium menu entries under ‘component config’ and make sure it’s set up the way you expect.

Once we’re done, I recommend saving the configuration as something descriptive; I often find myself using the same groups of features and having a few frequently used configurations handy allows me to get working faster when I have an idea or want to do a quick demo. Better still, put it in version control with the rest of your code so that you know what it depends on.

Note that this means that before compiling, you’ll need to copy your configuration of interest to a file named sdkconfig, then run ‘make’ to compile: cp yourfilename sdkconfig && make

Once completed, plug in your ESP32 and run ‘make flash’ in the terminal. If your ESP32 is connected to ttyUSB0, it will detect your device and load the compiled firmware; otherwise, set the correct port in the makefile. I found this last feature a real time saver – I expected it to return a file that I would then have to load onto the ESP32 using another tool.
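Put together, the whole workflow from clone to flash looks something like this (the saved-config filename is whatever you chose earlier, and the serial port is autodetected or set in the makefile):

```shell
git clone --branch dev-esp32 --recurse-submodules \
    https://github.com/nodemcu/nodemcu-firmware.git nodemcu-firmware-esp32
cd nodemcu-firmware-esp32
make menuconfig            # select modules (e.g. sodium) and save the config
cp yourfilename sdkconfig  # restore a previously saved configuration
make                       # compile the firmware
make flash                 # write it to the ESP32 (e.g. on /dev/ttyUSB0)
```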

To test, I prefer to use ESPlorer. Run it, open the serial port, and hit the reset button on your device or in the GUI – with a little luck you’ll see the terminal output the result of a successful boot.

Stepping into Sodium

That’s it for flashing the firmware with libSodium built in. Let’s play around with the cryptography library a little by benchmarking it with a primitive proof-of-work algorithm. There are simpler benchmarks, but they were a little boring and I don’t require exact numbers:

-- Set difficulty (number of leading zeroes) and start WiFi (needed for
-- cryptography), then generate a public and private key and print the public key
difficulty = 2
wifi.start()
public_key, secret_key = sodium.crypto_box.keypair()
print("Public key for this session: " .. encoder.toHex(public_key))

function pause()
    -- Take a short break so the ESP32 can handle other processes
    tmr.create():alarm(500, tmr.ALARM_SINGLE, mine)
end

function mine()
    local i = 0
    while i < 60 do
        i = i + 1
        -- Generate a large nonce by concatenating 3 numbers from the PRNG.
        -- Not sure how fast it generates entropy; check this another day.
        local x = sodium.random.random()
        local y = sodium.random.random()
        local z = sodium.random.random()
        local n = x .. y .. z
        -- Encrypt the nonce using the public key generated above
        local a = sodium.crypto_box.seal(n, public_key)
        local b = encoder.toHex(a)

        -- If there are a number of leading zeroes equal or greater to
        -- difficulty, decrypt and print the nonce
        if tonumber(string.sub(b, 1, difficulty)) == 0 then
            print("Valid nonce found! " .. sodium.crypto_box.seal_open(a, public_key, secret_key))
        end
    end
    pause()
end

mine()

This algorithm turns on the WiFi (required for cryptographic applications on the ESP32), and generates a key pair. Then it generates a single-use random number, a nonce, with about 8 × 10^28 possibilities, encrypts it with the public key, and checks if it starts with two leading zeroes. If it does, it decrypts with the private key and returns the original nonce. After 60 tries, it pauses for 500 milliseconds to allow other background functions to complete, otherwise it crashes. On the ESP8266, I might periodically reset the watchdog timer to let this run uninterrupted, but this was good enough for now.

Results come through at somewhere near three per minute, indicating we can try somewhere around thirteen nonces per second:

Don’t worry, I’m not going to create yet another cryptocurrency with this.
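Those numbers hang together: each of the two leading hex digits is zero with probability 1/16, so a nonce qualifies with probability (1/16)² = 1/256, and at roughly thirteen attempts per second a hit should land about every twenty seconds, or around three per minute. A quick check of that arithmetic:

```shell
# Expected time between valid nonces: 16^difficulty / (attempts per second)
awk 'BEGIN {
    difficulty = 2; rate = 13
    secs = 16^difficulty / rate
    printf "%.1f seconds per hit, %.1f hits per minute\n", secs, 60/secs
}'
# prints: 19.7 seconds per hit, 3.0 hits per minute
```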

It did brownout and reset once in a rare while, which never happened to me with a similar algorithm running on an ESP8266. It also got fairly warm, so it appears I need a better power supply if I’m going to run this much computation on the ESP32! However, it’s certainly fast enough for the applications I had in mind, so while I might stay with the inexpensive ESP8266 for the majority of my projects, the ESP32 has more than earned itself a place on my workbench now that I know how to compile custom NodeMCU firmware for it.

Complete guide to compiling OpenWRT
https://hackaday.com/2012/01/19/complete-guide-to-compiling-openwrt/
Thu, 19 Jan 2012 20:57:44 +0000

Regular reader [MS3FGX] recently wrote a guide to compiling OpenWRT from source. You may be wondering why directions for compiling an open source program warrant this kind of attention. The size and scope of the package make it difficult to traverse the options available to you at each point in the process, but [MS3FGX] adds clarity by discussing as much as possible along the way.

OpenWRT is an open source alternative firmware package that runs on many routers. It started as a way to unlock the potential of the Linksys WRT54G, but the versatility of the user interface and the accessibility of the Linux kernel made it a must-have for any router. This is part of what has complicated the build process: there are many different architectures supported, and you’ve got to configure the package to build for your specific hardware (or risk a bad firmware flash!).

You’ll need some hefty hardware to ease the processing time. The source package is about 300 MB, but after compilation the disk usage will reach into the gigabyte range. [MS3FGX] used a 6-core processor for compilation and it still took over 20 minutes for a bare-bones distribution. No wonder pre-built binaries are the only thing we’ve ever tried. But this is a good way to introduce yourself to the inner workings of the package and might make for a fun, if frustrating, weekend project.

Get the lead out of the Arduino compile process
https://hackaday.com/2011/09/13/get-the-lead-out-of-the-arduino-compile-process/
Tue, 13 Sep 2011 21:01:33 +0000

Relief is here from long compile times when developing firmware for your Arduino project. [Paul] was puzzled by the fact that every file used in a sketch is fully recompiled every time you hit upload–even if that file didn’t change. To make things more confusing, this behavior isn’t consistent across all Arduino compatible hardware. The Teensy has an additional feature not seen when working with other hardware boards in that it reuses previously compiled code if nothing has changed. It even tells you which files are being reused, as shown in the image above.

After the break we’ve embedded [Paul’s] video that walks us through the process of editing the Arduino IDE to reuse previously compiled files. It’s a one-liner addition to the boards.txt file. For example, if you’re working with the Arduino Uno all that needs to be added is ‘uno.build.dependency=true’. [Paul] had previously submitted a patch to roll this into the Arduino IDE source code, but it was not accepted citing a need for more testing. He’s asking for help with that testing and wants you to post your thoughts, or any bug information, on the new issue he’s opened regarding this feature.
