Buildroot: building embedded Linux systems made easy!

So, good afternoon. I'll be presenting Buildroot, a tool that helps building embedded Linux systems in an easy way, and also probably the project with the ugliest logo possible, but that's not really the point. Before we get started, a quick poll: who in the room is either working or playing with embedded Linux? So, a good number of people, more than half of the room. OK, great.

First, a few quick words about me. I'm the CTO and an embedded Linux engineer in a company called Free Electrons. We do embedded Linux development, consulting and training. We have a strong open-source focus, so we contribute to the Linux kernel, for example, and to Buildroot, and we also release all our training materials under a Creative Commons license, so we have lots of embedded Linux and Linux kernel training course material freely available for everyone to download. I'm an open-source contributor: I contribute to the Linux kernel, especially on the ARM support for some processors, namely the ones from Marvell, and I'm also a major contributor to Buildroot, with around 1,900 patches contributed over the last few years. And I come from Toulouse in France, so quite far away, and you've probably noticed my terrible accent.

So, "embedded" these days: when people say embedded, they usually think of something like phones, tablets, consumer-grade devices. Those devices typically have very powerful CPUs, lots of RAM, lots of storage, and a full-featured, general-purpose operating system, usually Android, but it can be something else: something with many applications that you can install, remove, upgrade and so on, really a complete desktop-like operating system. But in fact, embedded is much more than just the devices you have in your pockets. We have embedded systems in laser cutting machines, point-of-sale terminals, agriculture machines, windmills and many other systems that you may never have heard of. Those systems are much more application-specific, so they may need a more custom Linux system, one that boots fast, or meets some real-time deadlines, or has other types of constraints. They usually have a lot less powerful CPUs; a few hundred megahertz is quite common in such systems. They may use specialized CPU architectures: there's much more than x86 and ARM in the embedded world, MIPS and PowerPC of course, but also Blackfin, MicroBlaze, Nios and other bizarre architectures you've probably never heard of. These systems also have less RAM, less storage, and a long lifetime and maintenance period. So the constraints are quite different from the ones we have in consumer-grade devices such as phones and tablets, and the way we're going to create the Linux systems for those two types of embedded systems might be a little bit different, as we're going to see.

So, to build an embedded Linux system, to integrate all the different components that make a Linux system — the init system, the graphics stack, the network processes and so on — there are different solutions available. The first thing that comes to mind is to use binary Linux distributions: Debian, Ubuntu, or more specialized ones like Raspbian for the Raspberry Pi, or other distributions. With those, what you get are binaries that have been compiled by other people. The good thing is that it's readily available, easy to install, easy to use. But it's quite large, it's not available for all architectures, and it's not necessarily very easy to customize: of course you can install and remove packages, but if you want to slightly adjust the configuration of one package, or rebuild the entire thing with different compiler options, it's a little bit tricky. And it generally requires native compilation, so if any of you have tried to build a Linux kernel on a Raspberry Pi, it takes a long, long while. Native compilation is certainly nice for some things, but for some other things it's not so nice. So the other
approach, at the other side of the spectrum, is to build everything manually, from scratch: you take the source code for every component in your system, take the tarball, configure it, build it, install it, and do all that stuff. But it's quite hard, because you have to face cross-compilation issues, you have to resolve the dependency tree — and in the open-source world we reuse a lot of things, so there are many libraries. It's not reproducible, unless you take very clear notes of what you've done, and you don't benefit from other people's work. So there's kind of an intermediate solution that sits in between, and of course, as you might have guessed, Buildroot fits in that intermediate solution. This intermediate solution is the usage of tools that automate the process of building a Linux system from scratch, from source code, all the way to a completely working system.

Since we build from source and we can customize as we want, we can build very small and flexible systems: we can have just what we need inside the system and nothing more. It makes the process reproducible, because we have a tool instead of just a bunch of command lines that we ran. The tool handles the cross-compilation and dependency issues. It's virtually available for all architectures — we're going to see which ones Buildroot supports. But of course, it's one more tool to learn, and you have to spend some time waiting for the build to finish; at least you are not involved in that, so you can drink some coffee or other nice drinks while the build is going on.

There are many tools in this area, but the principle of all of these tools is more or less the same. Basically, they take some source code, either from open-source components or in-house components, from their git trees or tarballs on the web, and at the end they produce a root filesystem image that contains all your applications, libraries, init scripts, config files and so on, possibly a kernel image, a bootloader image, and a cross-compilation toolchain. And you feed into this tool a configuration that says: for my system, I'm going to build for ARM, and I want this graphical library inside, and this networking application, and so on. As I said, you build from source, so you have lots of flexibility: you can adjust whatever configuration option for each individual component as you want. Since we're cross-compiling everything, we are leveraging the power of our fast desktops and servers, so you can get a big build machine to do all your cross-compilation, which allows you to build a kernel for your Raspberry Pi in just a few minutes, instead of a few hours of native compilation on the target. And those tools have recipes for building components, so it makes things easy: instead of having to
worry about how to build X.org, you just select an option and the tool does it for you, because it knows how to build it. So there's a wide range of tools available to do that: the Yocto and OpenEmbedded projects, PTXdist, Buildroot, LTIB, OpenBricks, OpenWrt and more that I've probably forgotten. But from what I can see — and of course I have a biased opinion here — I think two solutions are really emerging as the most popular ones. On one side, there is Yocto/OpenEmbedded, and what these tools allow is to build a complete Linux distribution with binary packages. It produces, like, your own custom Debian for you, and then from this set of packages you can create embedded Linux systems that are able to install, upgrade and remove applications through binary packages, as you normally do in a binary distribution. These tools are very powerful, but they are somewhat complex and have a quite steep learning curve, so if you want to invest quite some time, they are certainly nice. Buildroot is kind of the opposite side of the spectrum: it simply builds a root filesystem image. There are no binary packages; you can't install, upgrade or remove things. If you want to change something, you go back into the tool, adjust your configuration, and regenerate a new root filesystem image — which is perfectly fine for most of the embedded Linux systems used in the industrial space. And it's also a tool that's much, much simpler to use, understand and modify, as we're going to see throughout the presentation.

So, in just a few words, what are the main characteristics of Buildroot? It's a tool that can build a toolchain, a root filesystem, a kernel and a bootloader: it's an embedded Linux build system. It's easy to configure, because it uses the menuconfig, xconfig and gconfig interfaces, just like the Linux kernel itself, so if you've ever built a Linux kernel, you already know the configuration interface of Buildroot. It's fast: you can build a simple root filesystem in just a
few minutes, so you don't have to wait for hours. It's very easy to understand, because it's written in make, and it has pretty good documentation — of course it could be better, but we have good documentation, I believe. The default root filesystem that it builds weighs 2 megabytes, so when I say small, I really mean small: it just contains the uClibc C library and BusyBox, and that's all. Then you can add more stuff on top of it, but at least you start with something small. It has more than 1,000 packages available, so you can just select things without having to worry about how to actually compile them, and many architectures are supported. It uses well-known technologies: as I said, it's entirely written in makefiles, and it uses kconfig for the configuration interface, so any kernel developer — or Linux developer in general — already knows these technologies. It's also a vendor-neutral tool: there's not one single company behind the tool doing the development or driving the direction, it's really an open-source community doing it. And it's probably the oldest build system still in activity: it was created in 2001 actually, so it's quite old, and the community is very active, with regular releases — I'm going to talk more about that a bit later.

So, who is using Buildroot? Google is using Buildroot: they do Google Fiber in the US, and the boxes they ship contain an embedded Linux system, and they use Buildroot to produce it. Barco is an international company doing visualization systems. Rockwell Collins works in the defense and aerospace industry, and they also use Buildroot and are even contributing to it. And we have many more; I just took three examples. We also have a lot of processor vendors using Buildroot as their BSP: they provide their customers a Buildroot that is specifically configured for their own architectures — Analog Devices for Blackfin, Imagination Technologies for MIPS, Marvell and Atmel for ARM, and several others. And also many, many hobbyists doing embedded Linux development on boards like the Raspberry Pi or the BeagleBone Black find Buildroot really nice, because it's easy and simple.

So, to get started with Buildroot, you just grab it from the git repo — there are stable releases of course, but it's probably easier with the git repo — and then you fire make menuconfig, and you have the well-known menuconfig interface to configure all the different aspects of the embedded Linux system you are going to produce. I'm going to go pretty quickly through the different configuration aspects that are available. First, you need to select the architecture, and as you can see, there are several of them supported, including AArch64, SuperH or Nios II, which are quite unusual architectures, and all the more common ones are available. In this part we can also select options specific to each architecture, like the type of processor, the floating-point strategy and so on. Then there are some build options: where Buildroot is going to download the tarballs, how many jobs to run in parallel, whether you want ccache, that kind of thing.
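The getting-started flow just described is only a couple of commands; a minimal session might look like this (illustrative commands, not runnable outside a machine with git and the usual build tools):

```shell
git clone git://git.buildroot.net/buildroot
cd buildroot
make menuconfig   # select architecture, toolchain, packages, filesystems...
make              # then go drink some coffee while it builds
```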
Another very important part is the toolchain: which cross-compiler you're going to use. You have two options here. Either Buildroot can build its own cross-compiler, in which case it's going to build GCC, the C library, binutils and so on, and it supports uClibc, glibc and eglibc as the different C libraries. The other option is to use what we call an external toolchain, where you already have a cross-compiler that's known to work really well for your architecture, and you just tell Buildroot to use it. The other advantage of an external toolchain is that you don't have to pay the price of building the toolchain, which takes quite a while: you save maybe 20 to 30 minutes of build time by using an existing toolchain. Then you have a way of doing some system-wide configuration, like which init system you want to use: BusyBox as the init system, which is the default because it's very small, or systemd, or a more traditional System V init. You can select how you want to handle /dev — whether you want to use udev or devtmpfs, there are various solutions — and other system-wide configuration options, like the root password, and whether you want a shell to run on some serial port, and so on. Then you can define which kernel should be built: which version, whether it comes from a git repository, whether there are patches, which configuration should be applied — it can be given as a file or as a defconfig. And we have support for the two probably most popular real-time extensions, RTAI and Xenomai, so they can be integrated easily in Buildroot as well. Then the most important part is definitely the target packages: what you're going to put in your root filesystem. That's where the highest value of the embedded Linux build systems is, and that's
where we have more than 1,000 packages. We have things like Qt 4 and 5, X.org, GTK and EFL, which means that by simply selecting an option in menuconfig, you can get Qt 5 built for your target platform without having to worry about how you actually need to configure it, compile it and install all the stuff. We have things like GStreamer, FFmpeg, many interpreted languages, many networking applications; since the Google Summer of Code last summer we have OpenGL support for various platforms; and many, many libraries and utilities — there are too many to name all of them, obviously. Another part of the configuration is which filesystem format you want to generate. We support things like ext2/3/4, of course, but also more embedded-specific filesystems like UBIFS or JFFS2 for flash, or cramfs, as well as squashfs for read-only filesystems. This morning during the lightning talks, someone was saying: I'm having filesystem corruption on Raspberry Pis used for museum exhibitions. Typically, a good answer would be to do something with squashfs: Buildroot can by default generate a root filesystem that remains read-only, and if it's read-only, you obviously limit the occurrence of filesystem corruption. You can also build bootloaders. Depending on which architecture you use, the bootloader will be different, but we support GRUB, syslinux, U-Boot, Barebox and a bunch of platform-specific bootloaders. So you really define all the aspects of your embedded Linux system. You can also build some native tools that are useful for development, but that's kind of a side thing.

And here is an example configuration. Just like for the Linux kernel, the configuration gets saved in a file named .config, which is just a text file giving the value of each option. Here is the configuration for the Raspberry Pi: it's going to build a kernel for an ARM platform, with an external toolchain; it's going to include the Raspberry Pi firmware with its proprietary binaries, the i2c-tools, a streaming server, the Dropbear SSH server and the lighttpd web server. Just with this configuration, it's going to directly produce an image that you can use on your Raspberry Pi. To start the build, you run make, and that's the point where you drink one, two or three cups of coffee or tea, depending on your taste. At the end of this process, in the output/images directory — that's where all the stuff gets put at the end of the build process — you have the root filesystem images (here I've selected both a tarball format and the UBI format, which is used for some embedded systems), the kernel image and the bootloader image. It completely depends on your configuration, of course, but that's really what needs to be flashed or installed on your target system: pushed to your SD card, or flashed in some way to your embedded Linux platform.
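A Raspberry Pi .config of the kind described above might contain lines like these (the symbol names follow Buildroot's BR2_ convention, but the exact set shown here is an approximation, not the speaker's actual file):

```
BR2_arm=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_PACKAGE_RPI_FIRMWARE=y
BR2_PACKAGE_I2C_TOOLS=y
BR2_PACKAGE_DROPBEAR=y
BR2_PACKAGE_LIGHTTPD=y
BR2_TARGET_ROOTFS_TAR=y
BR2_TARGET_ROOTFS_UBI=y
```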
So if you want to look at the build output, everything Buildroot generates goes into the output directory by default, though that can be changed using O= to do an out-of-tree build, exactly like with the kernel. output/build is where Buildroot extracts the source code of each and every component it's going to build — GCC, BusyBox, lighttpd and so on — and that's where it builds each of them. output/host is where Buildroot installs all the host utilities: when you build things for the target, you need things for the host, for example the cross-compiler, but also many other tools, so they all get installed there. In output/host/usr, under the tuple of your architecture — typically something like arm-unknown-linux-gnueabi — there is a sysroot, where Buildroot installs the headers and the libraries that have been built for the target; that allows the cross-compiler to find them and to build more libraries and more applications on top of the libraries built previously, which is what allows solving the dependency tree. output/target contains almost the target root filesystem. Why almost? Since Buildroot does not run as root, we cannot really create the root filesystem completely: the permissions and the file ownership are wrong, but we fix that up later, when we create the real images. So this directory cannot be used directly as the root filesystem, but it's really almost it. And then, as I said, output/images is where you find the final images that are interesting.

The build process is pretty simple — of course, if you go into the details it's a bit more complicated, but the overall overview is quite simple. Buildroot starts by copying what we call the root filesystem skeleton to the target directory.
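The output directory layout just described can be summarized like this (an approximate sketch; the exact tuple and file names depend on your configuration):

```
output/
├── build/    # one sub-directory per package: gcc, busybox, lighttpd, ...
├── host/     # tools for the build machine, including the cross-compiler
│   └── usr/arm-unknown-linux-gnueabi/sysroot/   # target headers + libraries
├── target/   # almost the target root filesystem (ownership fixed at image time)
└── images/   # final artifacts: rootfs.tar, rootfs.ubi, kernel, bootloader
```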
The root filesystem skeleton is just /etc, /bin, /lib, mostly empty — just a bunch of config files in /etc, and that's pretty much it. Then Buildroot takes care of the cross-compilation toolchain, and as I said, there are two solutions: either let Buildroot build your toolchain, or use an external toolchain; in both cases it takes care of what's needed to either build it or import the already existing toolchain. Then, for each of the selected packages, it first takes care of its dependencies — it applies this third step recursively for each of the packages — and then, for each of them, it downloads the source code and potentially some patches, extracts the source, applies the patches, configures the component, builds it and installs it. And it installs things in different locations, depending on the specific component being built. Target applications and libraries are obviously installed in output/target, so they end up in your root filesystem and show up on your target platform at the end. The target libraries are also installed in the sysroot, so they get seen by the cross-compiler when you build more applications and more libraries, as we were saying. And in output/host we install all the native libraries and applications, like the cross-compiler itself, but also more things: if you need flex or bison to build something, they get installed there. Finally, once that process is done for all the packages you selected, taking care of the dependencies and ordering the build properly, Buildroot generates the root filesystem images, and your system is ready to use.

Here are two real-world examples of projects that I've done using Buildroot. One of them was for a company making a device based on an ARM Atmel CPU, at 200 or maybe 400 MHz, with 64 megabytes of RAM, so pretty limited. This device was used to track things using GPS, RFID readers, a GSM modem, Ethernet and USB. In that system we used a pre-built glibc toolchain, so we didn't have to spend time building it; a Linux kernel; BusyBox for the basic stuff; an SSH server and client; the basic Qt library — not the graphical part, just the core; and some more components, like pppd for the GSM interaction, a special RFID library and other things; and the Qt application that was really making the system do whatever it was designed for. We generated a JFFS2 filesystem, and the filesystem was 11 megabytes in size with all these components — and it was actually using glibc, so it could have been made a little bit smaller with uClibc. Building all that stuff took 10 minutes on a quad-core i7 build machine, which is good, but not exceptional in terms of CPU power. The other one is a system based on x86, used for a vehicle navigation system, that mainly runs an OpenGL application showing what's going on in the area. Here we used a glibc toolchain
that was prepared with a tool called crosstool-NG, which is specialized in the creation of cross-compilation toolchains, so that, again, Buildroot doesn't have to spend the time rebuilding the toolchain over and over. We used the GRUB bootloader, a Linux kernel obviously, and BusyBox — it's kind of always the same thing — and a large part of the graphics stack. One interesting thing here is that this project was using an ATI card, whose proprietary driver provides the OpenGL implementation, and those drivers depended on a very specific version of the kernel and a very specific version of X.org. Thanks to the fact that we're building everything from source, it's pretty easy to select very precisely which version of each component you want in the system, which is sometimes a little bit more complicated when you have binary distributions: of course you could use an older Debian, but then everything is older, while here we can have everything recent except just those very specific libraries — that version of X.org, for example. So the flexibility of building from source is sometimes very useful. Then we added a bunch of libraries, like v4l and so on, and the OpenGL application. In the end, the filesystem weighs 95 megabytes, which is quite huge, but if you look at what makes up this size, there are actually 10 megabytes of application and 45 megabytes of ATI driver — I don't know what they do exactly in those drivers and that OpenGL implementation, but it's crazy. And this system took 27 minutes to build on the same quad-core i7 build machine, so it's pretty reasonable: you need to drink a little bit of coffee, but not too much.

Besides selecting the different packages, there are different ways you can customize the build, because packages are interesting, obviously, but you possibly need to add more configuration files or adjust things in the root filesystem, and there are different ways to do that. Buildroot provides what we call post-build and
post-image scripts. In the build process — I can basically go back here — you can ask Buildroot to execute a script before and after the image generation step, which is respectively post-build and post-image. Before the image is created, you can add more files to the root filesystem, adjust configuration files, remove things, and do whatever customization you want; and after the images are created, you can maybe bundle them together to create a firmware update image that will be pushed to your customers, or whatever. That's one way. We have another way, which is called the root filesystem overlay; it's kind of related. Basically, you tell Buildroot: here is a directory that contains stuff; once the build is done, it just takes this stuff and copies it inside the root filesystem.
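A post-build script of the kind just mentioned is an ordinary shell script; Buildroot runs it with the path to the target root filesystem as its first argument. Here is a minimal sketch — the banner text and the removed directory are made up for the demo, and the fallback default is only there so the script can run standalone:

```shell
#!/bin/sh
# Sketch of a Buildroot post-build script.
# Buildroot invokes it with output/target as the first argument;
# for a standalone demo we fall back to a local directory.
TARGET_DIR="${1:-demo-target}"
mkdir -p "$TARGET_DIR/etc"

# Add a custom banner to the root filesystem before the image is generated.
echo "My Embedded Product v1.0" > "$TARGET_DIR/etc/issue"

# Drop documentation that only wastes space on the device.
rm -rf "$TARGET_DIR/usr/share/doc"
```

In a real project you would point the post-build script option in your Buildroot configuration at this file.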

So you can put all your custom config files, custom init scripts, maybe even custom applications or whatever in there, and it will be part of the build, taken by Buildroot into the root filesystem. And the last way you can customize the build is by adding your own packages, either to package open-source components that are not yet handled in Buildroot, or your own in-house components, because you need to integrate your own libraries and your own applications into the build. So I'm going to show quickly what it looks like to create a new package in Buildroot. First of all, you have to make your package appear in the menuconfig interface, and here, thanks to the fact that we use the exact same kconfig code base as the kernel, it's exactly the same as defining configuration options inside the kernel. So any person with a little bit of kernel development experience knows how to do that — and if you don't have kernel development experience, as you can see, it's pretty easy. Here I'm showing the libmicrohttpd package. Basically, we're defining a config symbol, naming the option — that's what will appear in menuconfig — defining dependencies (here this package has no specific dependencies, except that the toolchain should provide thread support), a description, and then we have a little comment that says: your toolchain doesn't have threads, so you can't enable libmicrohttpd. This should be done in a file in a directory named after the package, so package/libmicrohttpd/Config.in. Once you've done that, you need to go to the upper-level Config.in, which includes the Config.in files of all the different packages, and add a new include: source, and the name of the file we just created. So that's the easy part. Then you need to describe how to actually build that package: how to extract it, how to configure it, how to build it and how to install it.
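The Config.in file just described looks roughly like this (a reconstruction from memory of the real package file, not a verbatim copy):

```
config BR2_PACKAGE_LIBMICROHTTPD
	bool "libmicrohttpd"
	depends on BR2_TOOLCHAIN_HAS_THREADS
	help
	  GNU libmicrohttpd is a small C library that makes it easy
	  to run an HTTP server as part of another application.

comment "libmicrohttpd needs a toolchain with thread support"
	depends on !BR2_TOOLCHAIN_HAS_THREADS
```

And in the top-level package/Config.in, you add the corresponding `source "package/libmicrohttpd/Config.in"` line.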
But it turns out that libmicrohttpd uses the autotools — autoconf and automake — so the way to configure and build it is very standardized. Instead of repeating that for each and every package in Buildroot, you can use what we call a package infrastructure, named autotools-package, which factors out all the gory details of building an autotools package and does all of that for you. So what we have to do here is just give a bunch of variables and values: the version of the component, and where it can be downloaded from — we can also give the license, and I'm going to get back to that later. We say that it should be installed to staging: this means that it will be installed both in the target and in the sysroot of the toolchain, because it's a library, and we want the cross-compiler to see that library for upcoming compilations. And then we can pass configuration options to the configure script. I've stripped down the real thing, because in the real Buildroot we handle things like optional SSL support, for example: if we see that OpenSSL is available, we pass one more option to tell libmicrohttpd, please enable SSL support. And that's all you need: that will create the package, it will automatically download the tarball, extract it, configure it, build it and install it. So that's all you need for autotools packages. We have different package infrastructures depending on the type of package you want to create: if your component uses the autotools, it's autotools-package, as I've shown; if your component uses CMake, we have cmake-package; if your component is a Python module using either distutils or setuptools, we have python-package, which we've recently integrated; and for all the rest we have an infrastructure called generic-package, where you have to do a little bit more work, because you have to explicitly say how to configure and build the package.
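The corresponding libmicrohttpd.mk, using the autotools-package infrastructure, looks roughly like this — again a from-memory sketch; the version number and the exact variable spellings may differ from the real tree:

```make
# package/libmicrohttpd/libmicrohttpd.mk

LIBMICROHTTPD_VERSION = 0.9.33
LIBMICROHTTPD_SITE = http://ftp.gnu.org/gnu/libmicrohttpd
LIBMICROHTTPD_LICENSE = LGPL-2.1+
LIBMICROHTTPD_LICENSE_FILES = COPYING
# It's a library, so also install it into the sysroot (staging)
LIBMICROHTTPD_INSTALL_STAGING = YES

# Optional SSL support, depending on whether OpenSSL is selected
ifeq ($(BR2_PACKAGE_OPENSSL),y)
LIBMICROHTTPD_DEPENDENCIES += openssl
LIBMICROHTTPD_CONF_OPTS += --enable-https
else
LIBMICROHTTPD_CONF_OPTS += --disable-https
endif

$(eval $(autotools-package))
```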
And what's coming is infrastructure for Perl and Node.js packages, so we're progressively improving those package infrastructures to make it simpler to package that kind of stuff.

Beyond building the system itself that you run on your target platform, another thing Buildroot provides is what we call the legal infrastructure. As you've seen in the example, the libmicrohttpd package is associated with licensing information: which license applies, and which file describes the license. We don't have that for all of the packages yet, because it's something we introduced less than a year ago, but it's growing; I think more than half of our packages have licensing information now. What we can do with it is that when you run make legal-info, Buildroot extracts the legal information for all the packages you've selected in your embedded Linux system, and provides you a licensing manifest in CSV format, listing all the components, their versions and their licenses, so that you can very easily do license compliance. It also stores in one directory the licenses of all the different components, and the source code of all the different components. So if all of this is right, technically you can take all that stuff, put it up on your website, and you should normally be compliant with the open-source licenses that you use in your system — at least, that's the idea. That's a pretty cool feature for companies doing embedded Linux systems: it uses the embedded Linux build system directly to help with license compliance.

Buildroot can also do dependency graphing, which is very interesting to understand why a particular component has been brought into the build. You run make graph-depends, and it generates a PDF that looks like this — I think it's for the Raspberry Pi configuration I was showing before. I'm not sure you can read it, but we can see that there is BusyBox, it's building the kernel, the kernel needs a bunch of things, it's building lighttpd, which needs some other stuff, and so on. So you can very easily understand why a component has been brought into the build.

Another thing that comes with Buildroot is the defconfigs — pretty much like the kernel ones. Defconfigs are configurations for popular platforms, for which we've decided to have configurations that build a minimal system. For example, we have configurations for the Raspberry Pi, the BeagleBone, the Cubieboard, the PandaBoard, many Atmel boards, Freescale boards, and also for many QEMU configurations, so that you can experiment with Buildroot and run things in QEMU ARM, QEMU MIPS, QEMU SPARC — we have tens of them. And each builds the most minimal system possible, like the two-megabyte root filesystem I was explaining before.
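The two make targets just mentioned are run from the top of the Buildroot tree, after configuring; for example (the output paths are from memory and may vary by version):

```shell
make legal-info     # licensing manifest, licenses and sources under output/legal-info/
make graph-depends  # dependency graph PDF under output/graphs/
```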
It contains just the C library and BusyBox, and that's it. Then, on top of that, you can add more packages, depending on what you want to do with your system: some people will use their Raspberry Pi as a media center, some other people will use it to monitor the temperature in their house or something, so the set of packages will differ from one usage to the other. But at least those defconfigs build the right kernel and the right bootloader, select the right architecture options and so on, and give you a system that is known to boot on the platform. To use them, you just do make <board>_defconfig — like make raspberrypi_defconfig — which loads that default configuration, and then you start the build by running make, and it produces a system that runs on your Raspberry Pi.

As I said before, Buildroot is an active project. It has had a quite hectic history, with periods without any maintainer, things going quite badly, and so on. But for five years now — since the beginning of 2009 — we have a maintainer, Peter Korsgaard, who lives in Belgium, and he is the gatekeeper for all the changes going into Buildroot. A bit like in the Linux kernel, there is only one person with commit access to the git repository, and we review patches on the mailing list. Since he has been leading the project, we have published releases every three months, very regularly, and they have always been on time, which is pretty good for users to know. Every three months, and we care about stability: we basically have two months of development and then one month of bug fixing. We have a growing number of contributors: these days, between 35 and 40 different contributors each month have patches integrated into the tree — in fact, many more contributors send patches, but for the patches that are accepted, it's like 30 to 40 people each month.
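The defconfig workflow described earlier boils down to just two commands, using the Raspberry Pi as the example board:

```shell
make raspberrypi_defconfig   # load the minimal known-good configuration
make                         # build the bootable system for that board
```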
showing that the project is growing and gaining interest, with between 1,500 and 2,000 emails a month, and it keeps growing. The number of packages is also growing: as you've seen, it was flat for a while, several years ago, because we had a huge period of cleanup when the new maintainer was appointed, and we cleaned up a lot of things; we rewrote most of it. Now we're in a phase where we can benefit from that cleanup and add many more packages, which is what's happening right now. We also have physical meetings twice a year: here, it was last year in February, in Brussels, in the Google offices, because since they use Buildroot they are nice enough to provide us meeting rooms. So we have developer meetings, and we're going to have one again in the Google offices in Brussels just next month, in early February.

So if you want to try out Buildroot, there is pretty good documentation, as I said, especially describing how to add packages; it describes things in a lot of detail. And like any other good open-source project, we have, as I said, the ugly logo, but we also have the mailing list, the IRC channel, and a bug tracker. I'm tempted to think that our community is quite friendly and welcoming: since it's a tool that's pretty easy, a lot of the people coming in are newcomers to the embedded Linux space, and we try to be nice to them and explain to them how things work and so on. There are also companies offering commercial support around Buildroot.

So to conclude, and leave some time for questions (I have like eight minutes): I think it's a nice tool, and because it uses well-known technologies and languages, you don't have to learn some funky language to use it, and the community is active and friendly. It's simple: if you want to understand what Buildroot does, it's just a few hundred lines of code to understand. They are not the easiest part of the Buildroot engine, but the core of the package infrastructure is really just a few hundred lines of code, so even newcomers, after just a few months of playing around with Buildroot, are able to provide patches that touch the core, because it remains simple, and that's something that is really key in the project. Whenever we have requests from users for new features, we always try to see whether that feature fits in the simple model that we want to keep, or whether it should instead be handled as an external script that gets called, or something like that, to avoid cluttering the tool with more and more use cases handled in bizarre ways. So we try to have one way to do things and
only one, and not clutter the tool with many features. Sometimes it's a little bit hard to find the right balance here, but I guess it's a challenge that many open-source projects are facing. So it's really easy to get started: in just an hour you can have built your filesystem for the Raspberry Pi or any other supported board and run it on your platform. And it's pretty efficient: the build times are quite reasonable, as I've shown, provided you use a reasonable machine. If you use VirtualBox on a very crappy machine, of course it's going to take a while, but other than that it's pretty efficient.

Okay, I guess it's time for questions. To ask questions, please speak clearly and loudly: I'm not a native speaker, so if you ramble I'm not going to understand. Yes, please?

[Question from the audience] So the question is how many lines of configuration each of the real-world examples I've shown required, is that right? So if you take a .config, it contains many lines, because it has the value for each option, just like in the Linux kernel. The one I've shown is the result of running make savedefconfig, which creates a defconfig file that only contains the options for which you've chosen a non-default value, okay? That's what makes them relatively short. For the systems I've shown, basically just count the number of words, more or less: the toolchain is going to be one option, the kernel maybe five or six to select the right version and configuration, BusyBox is one option, Dropbear is one, Qt is a bunch more, maybe five to ten options, each of the small components like libxml2 or logrotate is going to be one option, and the Qt application is one. So as you can see it's going to be maybe 30 or 40 lines or something like that, and this one is a little bit bigger, so it's going to be maybe 60 or 70 lines of config. But you don't write it; I mean, nobody writes... well, I do write them from time to time, but nobody writes that
thing: it's generated by menuconfig. You just navigate in the menus and select whatever you want, just like when you configure your kernel, and you don't have to worry about it.

[Question from the audience] I was wondering if there's like a to-do list of things that newcomers should target if they want to get into it? Okay, so the question is, first, thanks for thinking the tool is great for newcomers, and: is there a to-do list? Yes, there are different opportunities for newcomers to do things. We have a wiki on elinux.org for Buildroot, and it has a small to-do list; it's not that big, but there are a few things. We have reports from the previous meetings that always list a huge number of things to do and record the discussions that we had, so for newcomers it's a good way of seeing what topics are currently under discussion and maybe jumping in on some of them. Then all the patches are recorded in a tool called patchwork, and anybody can look at them, grab one of the patches, test it, review it and give feedback; that's something we're missing quite a lot at the moment, review of patches. We also have a bug tracker, but it's probably a bit trickier for newcomers to directly tackle bugs. And then, as usual, just scratch your own itch: build a root filesystem for your device, do things, and you'll probably figure out that something should be updated or improved, and things like that. Other questions? Yes, please?

[Question from the audience] So I'm being told that the recipes, the makefiles, look very similar to the ones of the FreeBSD ports or the Gentoo recipes and so on, and asked whether we can reuse them. Yes, and that's the same for other embedded Linux build systems: if you look at Yocto, OpenEmbedded, PTXdist and so on, they all have this kind of recipes. We don't directly reuse what the others are doing, in the sense that we don't copy/paste what they're doing, but we look a lot at what others are doing, and I believe they do the same, because I know some other build systems are reusing some of the patches that we have and so on. So there's a lot of reuse, but it's not direct reuse, because the syntax is different and the variable names are different and so on. I mean, I'm currently working on improving the Python cross-compilation support: I'm talking with a guy from Canonical doing work on Python in Ubuntu, and I'm
interacting with other build systems that also cross-compile Python, so we're interacting with other projects, but it's much more difficult to directly reuse recipes.

[Question from the audience] So the question is: between the internal and the external toolchain, what should I choose, since the internal one seems to take quite a while to build? So yes, indeed, the internal one is going to build binutils, then all the dependencies of GCC, then GCC, like, three times, and build the C library, so it takes a while. On my side, I mainly use external toolchains, and some of them I've produced with Buildroot itself: I use Buildroot once to build a toolchain, I keep a copy of it somewhere, and then I use that toolchain as an external toolchain in Buildroot, and I have a good number of them, one for each of the architectures that I care about. That's typically the way I do things, but I know some people stick with the internal toolchain, because you don't rebuild it every day: you build it once, then you do more modifications to your configuration, and only maybe at night do you restart the build from scratch. So of course, yes, whenever you do a make clean it's going to rebuild everything, so you'll have to rebuild the toolchain again. Buildroot has a very simple approach to configuration changes: it simply doesn't track them. When you make a configuration change, if you add a package it's going to notice, because, well, the package is not installed, so it should be installed; but if you remove a package from the configuration, it's not going to remove it from your target root filesystem. Other build systems do that; we don't, simply because it would add too much complexity to the core.

Last question, in the back? [Question from the audience] So the question is: can I use Buildroot without a network connection? I would say yes, but at some point you will need a network connection to grab the tarballs once; we have a lot of mechanisms to help you do offline builds

so we have make source, which is not going to trigger the build but only the download of each of the tarballs that are needed; we have make external-deps, which is going to spit out the list of the tarballs that you need to do an offline build; and we store all of the tarballs in a specific location, but you can also give what we call a primary site, which is an HTTP server from which Buildroot will fetch the tarballs instead of going to the web. So in your company you can have one internal HTTP server with all the tarballs that you care about and be completely isolated from the rest of the world. That's a pretty common requirement in some companies, and I think we support it quite well. Well, thank you very much for your attention.
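[Editor's note] The primary-site idea described above can be sketched roughly as follows. This is not Buildroot's actual download script: the mirror URL, the tarball names and the fetch simulation are invented for illustration (in real Buildroot the internal mirror is configured with the BR2_PRIMARY_SITE option, and make source pre-downloads everything).

```shell
#!/bin/sh
# Hypothetical sketch of the "primary site" fallback: try the
# company-internal mirror first, and fall back to the upstream
# location only if the mirror does not have the tarball.
# fetch() merely simulates a download here.

PRIMARY_SITE="http://mirror.example.com/tarballs"   # stand-in for BR2_PRIMARY_SITE

fetch() {
    # Simulated download: pretend the internal mirror only carries the
    # BusyBox tarball, while any https:// upstream URL is reachable.
    case "$1" in
        "$PRIMARY_SITE"/busybox-*) return 0 ;;
        https://*) return 0 ;;
        *) return 1 ;;
    esac
}

download() {
    tarball="$1"
    upstream="$2"
    if fetch "$PRIMARY_SITE/$tarball"; then
        echo "fetched $tarball from primary site"
    elif fetch "$upstream/$tarball"; then
        echo "fetched $tarball from upstream"
    else
        echo "download of $tarball failed" >&2
        return 1
    fi
}

download busybox-1.36.1.tar.bz2 https://busybox.net/downloads
download lighttpd-1.4.35.tar.gz https://download.lighttpd.net
```

Run as-is, the sketch reports BusyBox coming from the primary site and lighttpd from upstream, which is the behavior the talk describes: as long as the internal server carries all the tarballs, the build never needs to reach the outside network.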