Pi GPIO on USB – It’s neat….

Just popped in a pre-order for RyanTech’s latest. It’s a board with a 40-pin Pi header, a USB connector and software for Pixel, Linux, OS X and other operating systems which lets you drive it like a Pi’s GPIO header. There are so many neat things you can do with this card. It’ll let you hang that neat Pi HAT off your PC simply. It’ll let you double up your Pi’s GPIO capability.
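
For a flavour of what “drive it like a Pi’s GPIO header” might look like, here’s a minimal sketch assuming the board’s software exposes an RPi.GPIO-compatible Python module – the module name, pin number and wiring are my assumptions, not anything from RyanTech’s documentation:

```python
# Minimal sketch, assuming an RPi.GPIO-compatible module is provided for the board.
# The module name and the pin driving an LED are illustrative, not the vendor's API.
import time
import RPi.GPIO as GPIO  # or whatever compatible module ships with the board

GPIO.setmode(GPIO.BCM)    # Broadcom pin numbering, as on a real Pi
GPIO.setup(17, GPIO.OUT)  # treat pin 17 as an output

try:
    for _ in range(10):               # blink ten times
        GPIO.output(17, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(17, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                    # release the pins when done
```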

It’s up for pre-order on Indiegogo after a successful Kickstarter campaign and is looking well progressed towards getting to market. At £10 a board, it’s an easy pick – me, I ordered five.

(And Happy New Year to all… let’s see if I can keep this up 😁)

Here comes 2017 and…

Well, 2017 is days away and it’s time to make some decisions about this blog. The options are simple: shut it down and make a new personal blog in the style of Codescaling, or carry on here, reworking the site as my personal space. To be honest, the latter option seems the cheapest and most effective, so….

Codescaling is deaded, long live Codescaling.

I’ll be working on a new About page, along with some new posts about what’s been in my recent makery builds (Z80 boards, Orange Pi, Linkit 7655 and more), HackWimbledon bits, cross-postings from the day job at Compose and… well, we’ll see.

Just a note…

Yes, it has been quiet here. Things have been busy elsewhere and I’m in the process of reworking what and how I’ll be populating Codescaling. I’m currently leaning towards talking more about the scaled-down world, small systems and working with them. But it’s up in the air. So, reader, what do you want?

Numberwang with Linux 4.0-RC1

It’s kind of hard to remember when Linux last had a version upheaval like the first release candidate of Linux 4.0…. sorry, no, I tell a lie, it was 22 July 2011 when Linus finally pulled the handle on Linux 2.x and released Linux 3.0. That was quite a change when you consider that the 2.x version had arrived in 1996, with 2.6 turning up in 2003 and incrementing away all the way to 2.6.39 in 2011. The switch to 3.x has now seen 19 releases over four years, so switching version numbers up to 4.0 should be a no-brainer.

It was that 3.0 version change which woke people up to the Linux 2.x problem, where scripts assumed Linux versions began with a 2 and, let’s be honest, it wasn’t really a problem. If you have scripts which assume 3.x version numbers on your Linux builds, find the person who wrote them and sit them down for a “conversation”, because that kind of assumption isn’t excusable after only four years. For 2.x there were fifteen years of heritage; not so for 3.0.
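
To make the point concrete, here’s a rough Python sketch of the difference between that kind of fragile prefix test and actually parsing the kernel version – the version strings are just examples:

```python
# Parse the running kernel version instead of assuming it starts with "2" or "3".
import platform

release = platform.release()         # e.g. "4.0.0-rc1" or "3.19.3-generic"
parts = release.split(".")
major = int(parts[0])
minor = int(parts[1].split("-")[0])  # strip any "-rc1" / "-generic" suffix

# Fragile: the sort of test that broke when 2.6.x became 3.0
if release.startswith("2.6"):
    print("ancient assumptions at work")

# Better: compare the numbers you actually care about
if (major, minor) >= (3, 0):
    print("kernel %d.%d is new enough" % (major, minor))
```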

Don’t read too much into the use of a poll to pick the new version number.

... after extensive statistical analysis of my G+ polling, I've come to the inescapable conclusion that internet polls are bad.

That’s Linus’s git commit comment as he turned over the version numbering and labelled the release “Hurr durr I’ma sheep” – the other option in a “Please ignore this poll” poll. “Who can argue with solid numbers like that? 5,796 votes from people who can’t even follow the most basic directions?” says Linus. The 4.0 vs 3.20 poll had a bigger turnout but the majority was so slim “it could be considered noise”.

What’s in 4.0 RC1? It’s yet another incremental update of Linux. In his LKML posting, Linus points out that his favourite features are “actually some vm cleanups, where this release is getting rid of the largely unused non-linear remapping code (replaced with just emulating it with lots of smaller mappings) and unifies the NUMA and PROTNONE handling for page tables”. For others, the live patching system that’s being introduced may allow future kernel problems to be fixed without a reboot; here’s the commit.

Apart from that, it’s a small, typical update which would have passed relatively unnoticed if it had been a 3.20. So, it’s Linux 4.0 RC1 and that’s Numberwang!

Forking brilliant – Node/IO.js and Docker/Rocket

What’s up with Node: So there’s been a fork in Node.js land with the appearance of IO.js. A group of core contributing developers have lost patience with Joyent, the developmental home of Node.js, and have set out to accelerate the development of the asynchronous JavaScript server-side platform. This is the world of open source, where people can vote with their time and effort.

It’s easy to see both sides of the fork. Joyent want steady, stable development as they move towards a foundation-backed, open-sourced release. That progress has been guided from within Joyent, as is their right, but it has ended up in a situation where old code, like an unsupported version of the V8 JavaScript engine, is still actively used.

The forkers wanted to move things forward faster. Some had been involved in a light fork, Node-forward, which was designed to make the enhancements and then offer pull requests back to the Node project. But that wasn’t working for them. According to one of the better known users of Node, the fork has been a relatively polite affair in itself and most of the noise surrounding it has come from outside the Node developer community.

Which makes it all the more likely that this fork is going to be a good thing for the Node community as a whole. It’ll push both sides to compete on quality and progress, and with commitments to compatibility from the forkers, the door is still open for changes to be backported. Of course, it could all go off the rails. Right now, we get to look forward to January 13, when IO.js will release its first alpha.

What’s up with Docker: Over at Docker, another case of long-time contributors starting their own project has popped up. This time it’s all about containers. Containers in Linux let you run multiple systems off the same kernel. The problem was that LXC (Linux Containers) was hard work to set up and manage. Enter Docker in 2013 with an easy-to-configure, easy-to-deploy solution to that problem. This was great stuff, bringing containers to more than just the pioneers who’d been harnessing them quietly.
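
To show what that ease of deployment looks like from the client side, here’s a minimal sketch using the Docker SDK for Python (the docker package) – an assumption on my part rather than anything from the post, but it illustrates the model being discussed: a thin client asking the single Docker daemon to do all the work.

```python
# Minimal sketch, assuming the Docker SDK for Python ("pip install docker")
# and a running Docker daemon. The client does almost nothing itself; it just
# asks the daemon to pull the image, create the container and run it.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# One call: pull "alpine" if needed, run the command, capture the output,
# and remove the container afterwards.
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode().strip())
```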

It quickly started catching on, and CoreOS contributed to the development by dotCloud, the original Docker company which eventually became Docker Inc, because they saw a use for a de facto standard container within CoreOS, making app deployment easy.

Time passed and, as Docker Inc needed to grow, it started adding more and more management elements to its Docker offering. Some of this was undermining CoreOS, who just needed a well-matured container format to integrate with their server Linux. They weren’t happy with where development in Docker was heading, or with the technical and architectural debt it was bringing with it.

So CoreOS started building Rocket. Rocket isn’t a fork though; CoreOS started from scratch, releasing a prototype to GitHub and specs for review. They started from scratch because one of their problems is what they see as the monolithic approach in Docker, which they feel runs counter to a good security model. So rather than Docker tools talking to a single process and letting that do all the work, Rocket tools do the work themselves.

The company is already committed to Docker integration in CoreOS and isn’t dropping it, but it seems it wants to start building the foundations of a more secure container platform now, not wait till there’s an incident which blows out confidence. Rocket will notionally be done when it provides enough to create, package and run containers – containers defined by a specification which the Rocket developers created first. They hope that the spec will evolve and be implemented by others, including Docker.

Thoughts: These are two interestingly different splits. Both are powered by the force that powers most open source – enlightened self-interest. Both have the capacity to enhance the ecosystems they are splitting from. And both are being created by developers who are already vested and have contributed, and probably will still contribute, to the platforms they are splitting from. These have the potential to be sporks, splendid forks, if all parties are able to take as much as they give. Six months from now, both splits should have full releases and positions should be solidifying. How these things look a year from now is going to be very illustrative for open source in general. Just let me pop it in my diary now…

Oh hai there FreeBSD 10.0

Following up from the last post, here’s the FreeBSD 10.0 announcement. Listed highlights of FreeBSD 10 are: Clang is now the default compiler and GCC is no longer installed by default, Unbound is now the local caching DNS resolver and BIND is no longer part of the default install, make has been replaced with bmake, ZFS has TRIM support for SSDs and LZ4 compression, guesting under Hyper-V is now supported, and pkg is the default package manager.

The Release Notes offer up much more detail on the changes and there’s an errata list for the open issues that persisted into the release. The release notes pick out features like the ability, on AMD64, to address up to 4TB of memory, while at the other end of the scale, Raspberry Pi support has been added (though there are no easy-to-use images – see the wiki). One thing you may note from the release notes is the number of userland components previously based on GNU software which are being replaced by BSD-licensed versions – ar, ranlib, bc, dc, patch, sort and cpio. Find had already been replaced but has been updated to be more GNU cpio-like.

Full ISO images are available on the project’s FTP server, but please, be a good netizen and use a local mirror (and follow the ISO-IMAGES- link for your system). If you are looking for a server-oriented Unix to add to your skill set, FreeBSD is probably the most useful destination – if you are new to it, check the Installation Instructions too. For those who sensibly verify their downloads, MD5 and SHA256 sums are at the bottom of the announcement.
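
If you want to script that check, here’s a small Python sketch that hashes a downloaded image and compares it with the published value – the filename and expected sum below are placeholders, not values from the announcement:

```python
# Hash a downloaded image and compare it with the published SHA256 sum.
# The filename and EXPECTED value are placeholders - substitute your own.
import hashlib

ISO = "FreeBSD-10.0-RELEASE-amd64-disc1.iso"
EXPECTED = "paste the SHA256 from the announcement here"

sha256 = hashlib.sha256()
with open(ISO, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):  # read in 1 MB chunks
        sha256.update(chunk)

digest = sha256.hexdigest()
print("computed:", digest)
print("match" if digest == EXPECTED.lower() else "MISMATCH - do not use this image")
```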