Jan 16, 2013
 

Over the years I have tended to specialize in a few particularly focused areas of development. Until recently, this has primarily been in locking and scheduling, particularly with respect to multiprocessing and real-time.

Since joining Intel and starting work on the Yocto Project, I have had to branch out quite a bit. On the Linux kernel side, I’ve updated serial drivers, forayed into accelerometers and industrial I/O, debugged x86 early boot errors in the VM, contributed to the start of an upstream modular Linux kernel configuration system, and mapped out minimal configurations and tooling for whittling things down, all while keeping an eye on the areas I used to contribute to and fixing bugs as they arise.

Outside of the Linux kernel, I’ve worked to enable EFI support on some new platforms, refactored facets of image building and early boot, performed a similar minimal-configuration exploration for user space, fleshed out support for image generation using the ext[234] file systems, and generally made a nuisance of myself to those who actually know what they’re doing.

While I miss the ability to truly focus on a particular problem and dig deep into brain-bending execution graphs involving multiple threads, atomic variables, and memory barriers, I also appreciate the value of an increased awareness of how all these pieces fit together to form a greater whole. I’ll continue to try and squirrel away some time to work on things I’m most passionate about, but overall, I believe this time spent on the Yocto Project has made me a better developer.

May 04, 2011
 

With the soon-to-be-legendary “roadmap” Thomas Gleixner presented at the 2011 Embedded Linux Conference for the PREEMPT_RT patch series, people are likely wondering what that means for projects like The Yocto Project, which make the PREEMPT_RT patch series available as part of a larger integration effort.

It is our goal to maintain the linux-yocto recipes and git repositories as closely aligned with the mainline kernel as possible. This primary goal sets the schedule for which kernel version will be used as the basis for the next major release. 0.9 released with 2.6.34, 1.0 with 2.6.37, and 2.6.39 or 40 are likely to follow. These won’t necessarily align with the supported PREEMPT_RT kernel versions.

We have a few options. First, we could port PREEMPT_RT to whatever the next kernel version turns out to be. Second, we could release a separate kernel tree just for RT. Third, we could simply not include PREEMPT_RT when the versions do not align. There is precedent for the first option; indeed, this is exactly how we provide a 2.6.34 PREEMPT_RT tree with 0.9 and 1.0. We took the third option for 1.0 and the 2.6.37 tree, preferring to stick with the 2.6.34 version for RT kernels rather than do yet another forward port to 2.6.37, duplicating much of Thomas’ work toward a 2.6.38 PREEMPT_RT. Such a port is time-consuming, error-prone, and doesn’t contribute to the overall quality of the PREEMPT_RT patch series, as it stands somewhat apart from what Thomas releases. The second option, a separate kernel tree, is contrary to another key goal of The Yocto Project, which is to reduce duplication of effort. A separate kernel tree would require duplication of the meta branch, and all the bug fixes, security fixes, and feature sets would have to be applied to each tree.

Going forward, our approach to PREEMPT_RT will be as follows. We will strive to align with the official PREEMPT_RT releases. When that is not possible, we will favor skipping RT support for a kernel version or two, just as Thomas does with PREEMPT_RT. On very rare occasions, such as with the 2.6.34 kernel, we may opt to forward port PREEMPT_RT to align with the current linux-yocto recipe’s base kernel version.

Mar 12, 2011
 

Work and family life were busy, so it was a few days before I could put the QNAP TS-419P+ to the test with some representative use cases. But before I did, I spent some time educating myself about RAID levels and came to the conclusion that until I am in desperate need of more storage space, RAID 5 just doesn’t make sense. Here’s why. RAID 5 distributes parity across all the drives in the array, and that parity calculation is both compute and IO intensive. Every write requires a parity update, and a full-stripe write touches every drive in the array. With the low-power CPU already the bottleneck for throughput, adding an additional load didn’t seem like a good idea. More important is data integrity. RAID 5 allows a single drive to fail without any loss of data. However, if one of the remaining three drives were to fail while rebuilding the array, all the data is lost. Rebuilding a RAID 5 array after a single disk failure is also very compute and IO intensive, as every disk must be read in order to restore the blocks to the new drive.

A better option for a four-drive array is RAID 10, a stripe across two mirrored pairs of disks. In this configuration writes only affect the two drives in the mirror, and data integrity is much improved. After a single drive fails, it is restored by copying its sibling in the mirrored pair. If one of the three remaining drives were to fail, there is only a 33% chance that data will be unrecoverable, as either drive from the other mirrored pair could fail without a problem.

The cost for this is total volume size. RAID 5 provides SIZE*(N-1) while RAID 10 provides SIZE*(N/2). With four 1.5TB drives, RAID 5 yields a 4.5TB volume, while RAID 10 yields only 3TB. When drives were expensive, this 50% gain was significant, but when 2TB drives can be had for under $100, and larger drives become available every year (3TB drives now ship with consumer-level NAS products), the principal value of RAID 5 is not as convincing as it once was. With RAID 10 and current technology (3TB drives) I can still double my capacity, and that should only improve in the coming years.
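
The arithmetic is easy to sanity-check from a shell; here is a quick sketch using my four 1.5TB drives (adjust N and SIZE for other arrays):

N=4          # number of drives in the array
SIZE=1.5     # capacity of each drive in TB
# RAID 5 usable capacity is SIZE * (N - 1)
echo "RAID 5:  $(echo "$SIZE * ($N - 1)" | bc) TB"
# RAID 10 usable capacity is SIZE * (N / 2)
echo "RAID 10: $(echo "$SIZE * ($N / 2)" | bc) TB"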

OK, on to configuration. I set about configuring the NAS for use with my home Linux network. I should preface this by saying I am not a network file system expert, not even an experienced user. I have set up NFS enough times to know the homogeneous uid/gid thing is a pain and that there are plenty of failing corner cases with respect to dropped connections, file locking, etc. QNAP claims to support NFS, so I expected them to provide the necessary tooling in their oft-praised web interface. The sad truth is that NFS appears to be an afterthought, and the implementation barely merits an “[x] NFS” string on their marketing material. The UI allows you to add users, but not to specify (or modify) the uid or gid. This means that standard Unix file permissions simply do not work, and their solution appears to be to make the shares globally read-only or globally read-write. Two words: cop out. Fortunately QNAP does provide a root-only ssh shell, and I was able to log in and manually edit /etc/passwd and /etc/group to make my users match the rest of the network. Some careful recursive ‘chmod g+s ug+rw o-rwx’ commands provided me with the permissions I wanted – but avoiding that sort of work is precisely why I opted for QNAP instead of building my own. In this regard, they failed miserably.
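
For anyone in the same spot, the manual fix-up over ssh amounted to roughly the following. This is only a sketch: the uid/gid values, username, and share path are illustrative, and hand edits to /etc/passwd and /etc/group on the NAS may not survive a firmware update.

# Make the NAS user's uid/gid match the rest of the network
vi /etc/passwd    # set the user's uid to match the other machines (e.g. 1000)
vi /etc/group     # set the group's gid to match as well
# Re-own the share and grant group-friendly permissions recursively
chown -R dvhart:users /share/media
find /share/media -type d -exec chmod g+s {} +    # new files inherit the group
chmod -R ug+rw,o-rwx /share/media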

The SMB story is better. The QNAP UI supports per-volume user and group permissions. While the on-disk representation is still globally read-write, it’s not a problem as SMB performs its own user authentication and only the admin user has ssh access anyway. I tinkered with this enough to get it working with the GNOME desktop file manager and with autofs. This might be the best way to access shares on the QNAP, even from a Linux system. Still, something about running SMB makes me feel like I need to shower.
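
For reference, the autofs side looks something like this. The share name, paths, and username are illustrative, and mount.cifs comes from the cifs-utils package:

# /etc/auto.master entry: mount NAS shares on demand under /mnt/toph
#   /mnt/toph  /etc/auto.toph  --timeout=60
# /etc/auto.toph entry: one line per share, authenticating as the SMB user
#   media  -fstype=cifs,credentials=/etc/samba/toph.cred,uid=dvhart  ://toph/media
# Or, a one-off mount for testing:
mount -t cifs //toph/media /mnt/media -o user=dvhart,uid=dvhart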

For throughput tests I used the rsync daemon to copy my MythTV recordings to the QNAP. This consisted of 300GB of mostly mpeg2 files, from 2 to 6 GB each. I used the UI as well as top to monitor the system status periodically during the transfer. The CPU was pegged at 100% for the duration of the transfer, and it averaged just over 20MB/s. QNAP claims 45MB/s writes over SMB and 42MB/s over FTP; rsync should be faster, if anything. Throughput was a disappointment. Following the transfer the QNAP remained under heavy load (7-8) and became fairly unresponsive. Watching the kernel logs I found a few out-of-memory errors, with apache and php being among the OOM killer’s victims. I raised the throughput and OOM issues with QNAP and my supplier. They weren’t able to suggest any changes to improve throughput or identify why the OOM occurred. They did agree to allow me to return the TS-419P+ in exchange for a TS-459Pro+. The latter replaces the ARM CPU with a dual-core Atom, doubles the RAM, and replaces the 16MB flash with a 512MB DOM. 20MB/s just wasn’t cutting it, and a kernel OOM was unacceptable. I shipped the QNAP TS-419P+ back and am impatiently awaiting a TS-459Pro+. Whether I keep the QNAP firmware or replace it with Ubuntu Server or perhaps a custom Yocto image is the subject for a future project.
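
For the record, the transfer was driven by something along these lines; the source path and rsync module name are illustrative, and --progress gives a rough running throughput figure while --stats summarizes the run:

# Push the MythTV recordings to the rsync daemon on the NAS
rsync -av --progress /var/lib/mythtv/recordings/ rsync://toph/recordings/
# Re-run with --stats for aggregate numbers at the end of the transfer
rsync -av --stats /var/lib/mythtv/recordings/ rsync://toph/recordings/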

Feb 25, 2011
 

As if the arrival of the final components for Rage wasn’t enough tech debauchery, the present trucks also delivered a shiny new QNAP TS-419P+ and 4 Samsung Spinpoint F2 1.5TB drives. Devon helped me unpack everything and carefully mount the drives in their trays. He even helped me plug it in and start the initial setup process.

The QNAP packaging and physical documentation is simple, nay, spartan. Which I like. The device itself is smaller than I expected (always nice), but I was disappointed to find a separate power brick instead of a built-in power supply – this reduced my excitement about the compact size of the unit somewhat. The recommended “Linux Setup” was to connect a PC directly to the NAS and configure your networks to talk to each other – this didn’t appeal to me, so I just looked up the QNAP’s IP on my dd-wrt router and followed the directions for Windows and Mac – just without installing the QNAP finder application.

The QNAP web interface is highly polished. Initial setup included setting the hostname (I selected Toph, in keeping with my heroine theme for my personal machines), installing the latest firmware, setting an initial password, choosing which network services to enable, and selecting an initial RAID configuration. Perhaps this is obvious to everyone else, but be sure to unzip the firmware you download from the QNAP website, otherwise you’ll just get an unhelpful error complaining the image is bad. I found the initial RAID selection to be odd as it is very limited. I chose RAID 5 as that is probably what I want to do, but the device offers a lot more options than a single RAID array using all the disks. Given the amount of time it takes to resync a 4.5 TB RAID 5 array, it seems like this step could be skipped and the user sent directly to the full-featured volume management admin screen at first login. Instead, after completing the initial setup, you are presented with this iTunes-wanna-be AJAX interface:

[Screenshot from the qnap album: the volume management screen]

Here you can see the volume management screen – and an ascending “time remaining” field in the Status column. I really don’t know how I’ll partition things up, or if I even need to. The QNAP offers a _ton_ of flexibility in how you access your data, and I’ll need to spend a good deal of time considering the options before I make a final decision. I’ll reserve judgement on these features until then.

[Screenshot from the qnap album]

Out of the box, several network services are available for immediate configuration:

[Screenshot from the qnap album: network services configuration]

And finally, QNAP offers add-on packages in the form of QPKG, which oddly enough includes an IPKG application for an even greater selection of packages. There are several media streaming servers available, including one that is pre-installed. The installation process appears a bit cumbersome, requiring the user to download the package to a PC and then upload it to the NAS for installation. I am looking forward to installing Python, possibly Twonky, and maybe MySQL and WordPress (I’m considering moving this blog away from Drupal and to something else).

[Screenshot from the qnap album: QPKG add-on packages]

So for now, my QNAP is resyncing its RAID 5 array. I hope to have the time to explore its many features soon, and I’ll share my experience as I do. My initial impressions are good, and I’m optimistic that this will turn out to have been a good choice for our needs.

Feb 24, 2011
 
[Photo from the Rage album]

The present truck(s) were good to me today. I received the two Intel Xeon X5680 CPUs, the two Seagate Barracuda 1TB drives, the Intel 160GB G2 SSD, and the second heat sink. The Supermicro hot-swap trays don’t allow for mounting 2.5″ drives, so I had to mount the SSD in a 3.5″ bracket in a 5.25″ bracket. Lame. As I mentioned in my last post, the first CPU cooler’s fan conflicts with the rear chassis fan. Since I had to choose between the two, I chose to keep the larger (quieter) chassis fan, but I connected it to the CPU 1 FAN header instead of the FAN 5 header. This is a guess on my part, but I figure the CPU is the first thing to get hot, and the most valuable component in the system, so it makes sense to me to let its temperature determine the fan speed. This may cause problems, however, as the fan speed used by the CPU 1 FAN header is probably not appropriate for the larger fan, and I don’t know how removing the FAN 5 connection will impact how the system decides to use the forward fan (which is smaller, and louder). Any insight readers may have here is very welcome.

Initial power-on is always exciting, and this was no different; perhaps even more so. After pressing the power button, Rage jumped to life like a wild beast startled from slumber. Her fans roared and her many bright beady eyes flickered their discontent. After familiarizing myself with her BIOS settings, I ran a quick Ubuntu 10 install off USB (it was absurdly fast). The BIOS RAID options were confusing at best, and I felt I just might get better results with software RAID via mdadm (at least more control). Rage is currently resyncing a RAID 1 array composed of the two 1TB SATA drives. I’m not sure quite how long this will take, and with her periodic snoring (loud fan bursts), I may just have to force her back into hibernation so my better half can sleep tonight.
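
For the record, the mdadm setup amounts to a few commands like these. The device names are illustrative; double-check them with lsblk or fdisk -l before creating the array, and note the resync proceeds in the background:

# Create a RAID 1 array from the two 1TB SATA drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Watch the resync progress
cat /proc/mdstat
# Record the array so it assembles automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf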

Just as soon as I can, I’ll kick off a complete Yocto build and share the results. Following that, I’ll run some burn-in tests to ensure the memory, CPUs, and HDDs are all functioning properly. I haven’t tested IPMI 2.0 support yet (remote access, KVM, etc.); I’ll get to that soon as well.

Feb 18, 2011
 
[Photo from the Rage album]

The first round of components arrived for my Yocto Project and Linux kernel development system. I haven’t built a system like this (piece by piece) since I started using laptops in 2002, so I had to learn all the new terms for all the same architectural bits. Spec’ing out the system was an interesting experience, and I learned something about categories at Newegg. Finding quality components can be a real challenge as you first have to sort through all the neon-lights-and-acrylic-chassis-viewing-window-crowd junk. But there is a shortcut – the term is “server”. Select “Server” to narrow the search for memory, CPUs, and especially cases and CPU coolers, and all the teenage-gamer-consumer crap goes away, leaving no-nonsense computing hardware. The heatsinks were under “server accessories” rather than “CPU fans”.

So first, the specs:

The machine will be put to a variety of uses, but most of the time it will be used for two things. First, as a build system for the Yocto Project. We build for four architectures, a variety of machines, and several image types. A typical build takes two hours (we are working on reducing that), and as my primary area of focus is the kernel, I try to build as many architectures as possible as I change things. Once built, these images can be tested in qemu. Being able to build these quickly and keep all the build trees around to facilitate incremental builds is important to staying productive.
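
As a rough sketch of what that build loop looks like in practice (the machine and image names are illustrative and assume a current Poky checkout):

# Set up the build environment, then build a small image for several machines
source oe-init-build-env
for m in qemux86 qemuarm qemumips qemuppc; do
    MACHINE=$m bitbake core-image-minimal
done
# Boot one of the results under emulation for a quick sanity check
runqemu qemux86 core-image-minimal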

Secondly, I’ll use this beast to continue to work on the futex subsystem, parallel locking constructs, and real-time. When it comes to catching locking bugs or identifying bottlenecks – there is simply no substitute for core count.

When it isn’t busy with either of the above, I hope to use this system to build and test the mainline and tip Linux kernel trees.

Back to the assembly. For this stage, I only have the chassis, motherboard, and memory; I’m having to wait a bit on CPUs and disks. The assembly was straightforward, but I obsessed about airflow and cable management. Supermicro matches their chassis to their motherboards, so the usual time spent mapping and aligning every LED and switch connector was replaced with a single ribbon connector – very nice. I still read through the manuals to make sure I was getting everything right. Turns out the motherboard has a built-in speaker where the manual says the speaker header should be; fine. There is some ducting to keep air flowing from the front of the chassis, over the motherboard, and out the back, and I made sure I routed the SATA cables clear of that. Finally, the 665W ultra-quiet PSU is not modular, so I had to find a place for all the cables I didn’t use while minimizing obstructions to airflow for the chassis and the PSU itself. Some careful bundling and a couple of wire ties seem to have wrapped that up nicely.

I also discovered that CPU1’s fan conflicts with the rear chassis fan. I have a choice: I can remove the rear chassis fan, or I can remove the fan from the CPU heatsink (which was made easy by Supermicro). I’m somewhat disappointed in Supermicro here. This is their motherboard, with their recommended CPU fans, in their recommended chassis. Fortunately, the rear fan is immediately behind CPU1, and likely moves as much, if not more, air with less noise. If I do remove the CPU fan, do I connect the chassis fan to the CPU1FAN header, or leave it connected to the generic FAN5 header? I was pleased that both chassis fans and the CPU fans are four-wire fans, meaning their speed (and therefore noise level) can be controlled by the BIOS depending on temperature.

This motherboard supports IPMI 2.0, meaning it has a service processor and a full graphical KVM. I’ll be running this system headless, connected via two gigabit links to my home network. I was very pleased overall with the quality of the Supermicro components; they are a significant step up from what I’m used to seeing in consumer computing, and while not cheap, they were not particularly expensive either. Only time will tell, but I’m becoming a Supermicro fan… er… enthusiast.
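
Once the system is headless, day-to-day remote management should boil down to a couple of ipmitool invocations like these; the BMC address and credentials are placeholders:

# Query chassis power state over the network
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme chassis status
# Attach to the serial console via Serial-over-LAN
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme sol activate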

Next time: CPUs, HDDs, RAID setup and benchmarking!

Jan 14, 2011
 

I holed up in my cave with a Beagleboard XM rev B and spent the day learning how an ARM board boots. After a few hours of aimless wandering, misdirection, ill-conceived notions, and deep tangents, I am happy to say I managed to boot a minimal Yocto image on the Beagleboard XM!

beagleboard login: root
root@beagleboard:~# uname -r
2.6.34.7-yocto-standard

[Photo from dvhart's blog: the Beagleboard XM booted into Yocto]

The key difference (in terms of booting) between the Beagleboard and the Beagleboard XM is the lack of NAND. This changes the boot process slightly. Specifically, it requires an X-Loader and a u-boot binary on the MMC (as opposed to having them in NAND) – or something along those lines. I am thrilled that in a single day (not even a very long one) I was able to get Yocto booted on an architecture I have exactly zero experience with.
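
In practice, preparing the MMC card for the XM looks something like this. It is only a sketch: the device node and binary names are illustrative, the FAT partition must be the first one and flagged bootable, and pointing these commands at the wrong device will destroy data.

SD=/dev/sdX                       # the SD card, NOT a hard drive
mkfs.vfat -F 32 -n boot ${SD}1    # first (bootable) partition as FAT32
mount ${SD}1 /mnt
cp MLO /mnt/                      # X-Loader; copy this first
cp u-boot.bin /mnt/               # u-boot
cp uImage /mnt/                   # kernel image
umount /mnt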

I will be spending next week tweaking poky recipes to generate the proper binaries and user boot scripts. The linux-yocto meta-data for the Beagleboard will also need to be updated for the new ethernet interface on the XM. For those of you eager to boot Yocto on your Beagleboard XM, you won’t have to wait long!

And in case you are wondering how your Kindle can help you with Beagleboard development, here are two ways. First, it can display the Beagleboard manual if you email it to your Kindle account. And second, its power supply, with the proper mini-USB cable, provides adequate voltage and more current than my laptop, and is able to power the Beagleboard XM; my laptop could not source quite enough current, which, oddly enough, resulted in a kernel panic.

Oct 28, 2010
 



Shuttleworth’s recent announcement that Unity will be the default shell for Ubuntu 11.04 may have just tipped the scales in Fedora’s favor.

I’ve been a very happy Ubuntu consumer for several years. But as Canonical moves away from just being a fantastic integrator and tries to muscle its way into UI design, I find myself less and less content with their distribution. The recent themes and blurred graphics of the last two releases are far too garish (dark gray, purple, and orange – really?) to have general acceptance (just flat-out-dog-ass ugly in my opinion).

The windicators push has never made sense to me. Moving the window controls to the left to make room for the windicators…. uhm… what’s wrong with the space on the left? Are you too good for the left? I’ve also not heard of a single use-case for windicators that makes any sense at all. A shopping-cart? Really? Do you plan to get ebay, amazon, and the other 50 billion internet retailers to hook into it? If not, you’re left with yet another inconsistent user experience.

And now Unity. I blogged on my initial experience with Unity and am sad to say that things have only gotten worse. Part of the problem is clearly that it simply wasn’t ready to be part of an official release. The file-manager is useless, infuriating my wife as she tried to copy a file to her USB drive (requiring my intervention). The family has announced their dislike for the shell and a downgrade to 10.04 is on my todo list until such time as MeeGo gets multi-user support.

And how about quality? With all the attention on UI design, it is my opinion that the 10.10 Maverick release fell significantly shy of Canonical’s fairly stellar integration and QA standards. Since the upgrade to 10.10, both my ThinkPad x201 and our family Toshiba NB-305 netbook need nm-applet manually killed and restarted after a resume: the icon disappears from the panel, and it won’t reconnect to the wireless access point. My x201 also suffers from some dbus error, which I avoid by running the Lucid kernel.
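
For anyone hitting the same nm-applet problem, the manual workaround after each resume is just a couple of commands (a sketch, run as the logged-in desktop user):

# Restart the NetworkManager applet so the panel icon and wireless return
killall nm-applet
nohup nm-applet >/dev/null 2>&1 &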

Come on Canonical, you’re better than this! Your consumers expect better from you. By all means, expand your horizons, apply your vast resources to innovation, improve the Linux Desktop user experience, but please, don’t neglect what made you great – a polished distribution that brought Linux to the masses.

Oct 27, 2010
 


The first question I received (and indeed the first one I asked a few weeks ago) regarding the Yocto Project was, “Yet another Linux distribution?” We certainly have plenty of those; adding one more would really only serve to further fragment the embedded space.

So no, Yocto is not a distribution (like Fedora and Ubuntu) and it is not a platform (like MeeGo and Android). The Yocto Project is a workgroup (as described in the Linux Foundation announcement), and the core bit of software behind it is the Poky build system, which has its roots in OpenEmbedded. So the Yocto Project is targeted at making it easier to create your own Linux distribution. It also serves as an umbrella project to collect things like BSPs all in one place.

The ultimate goal is to provide a common collaboration point from which we can reduce the chaos in this space and make it easier for people to bring Linux based devices to market.

If you’re still not convinced, please take a look at the Quick Start Guide and even the Reference Manual which will introduce you to the sorts of things you can do with the Yocto Project.
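
To give a flavor of what that looks like, the basic workflow boils down to a few commands. This is only a sketch; the image name is illustrative, and the Quick Start Guide remains the authoritative reference:

# Fetch Poky, set up a build environment, and build a small image
git clone git://git.yoctoproject.org/poky
cd poky
source oe-init-build-env
bitbake core-image-minimal
# Boot the result under emulation
runqemu qemux86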

Oct 27, 2010
 


After weeks of barely being able to contain myself while being probed about what I’m working on at Intel (my new job as of about 6 weeks ago), I can finally share all the details publicly. We’re proud to announce the initial public release of The Yocto Project!

Check out all the buzz about Yocto:

I am personally working on the Linux kernel itself. I’ve spent the last few weeks learning the system, fixing a few bugs wherever I ran into them, and preparing the live demo for the CELF conference.