Discourse

  1. In linguistics, a unit of language longer than a single sentence.
  2. More broadly, the use of spoken or written language in a social context.

This is the technical blog for my codeworks, a small consultancy in Sweden. We help companies with architecture, training, test driven development and agile processes. On occasion we get to code a bit as well :)

We do project rescue when something has gone wrong or gotten stuck. But generally we prefer it if we get a call before the large demo saw is needed to cut into whatever wreck the project has turned into. We excel at starting projects on the right track and keeping them there (probably since we have been on so many disaster commissions).

We are foremen in the software industry and this is where we write about the values we hold and why they are important.


Copy with style in Atom

Ever had to include code in a document or presentation? Ever wished there was an easy way to copy it from the editor with syntax highlighting? Well, now there is! Read on ...

Applying to Toptal

A short post about why Jonas is applying to Toptal. Don't worry, he is not going anywhere! He is just looking for some new challenges to complement the current work. Have a read and you'll see what I mean...

Removing a team from the Mac Slack app

Problems removing a defunct team from Slack under OS X? Is it returning like a ghost from the past whenever you restart Slack? Getting constant new-message indicators because of it?

Fret no more, I'll tell you how to permanently banish that pesky, attention seeking remnant with a text editor and a few minutes of your time.

No better than the rest of us ...

So far so bad

This year has not been great when it comes to my single side project promise :/

I do really want to get going but right now I can't find the time however I look. I even stopped playing World of Warcraft, the last outpost of my free time, to be able to fit the rest of my life in.

What I have done

Not everything is a loss though. I have decided on a fair few things for the project going forward. For example, I'll release each chapter draft as a standalone ebook.

This means that you will all be able to read the chapters as the first drafts are finished, and the second drafts will be based on the feedback you give on the first ones.

Hopefully this will help me create a book that is better tailored to its intended audience than I could on my own.

Chapter plan

I also decided to throw out the current chapter plan and create a new one from scratch. The first plan was just a bunch of points I jotted down and tried to organise into a reasonable structure.

Don't get me wrong, it is still all stuff I plan to cover. I will just try to fit the chapter plan more closely to the intended reader's likely progression than to my post-hoc view of how it all fits together.

If you'd like to know more about the book and sign up to the email list there is a separate page for that here: Join the book update email list!

Does programming suck?

This blog post is a reply to Luis Solano's post "Why Does Programming Suck?", which you can find on Medium. The original is a very long read, so I summarize it here for convenience.

The gist

Luis' main point is that computers and programming weren't really meant, when they were invented, for what we use them for now. He describes the origins and evolution, the increasing complexity of our technology stack and some problems he sees with the current state of affairs.

To some extent I do agree with the core tenet of the piece. I too think we are unnecessarily hampered by legacy in our current work and that modern development could be made a lot nicer/easier/cleaner than it is.

But the wholesale idea that programming has to be remade, hardware and everything, feels a bit flawed to me. It's like saying that cars, since they were derived from earlier forms of transportation like horse-drawn carriages, have to be remade from the ground up.

Sure, there are probably a few things we would change if we could start over with cars, mainly safety and environment related I'd think. But these things, the things we would change, are always present yet never the same.

For cars there was a time when safety was not an issue. Environmental impact has not been a huge issue in the past. What will be an issue in 10-20 years I don't know, but that is the point, we can't know.

Gradual evolution of one form into another might not produce optimal forms at each step but is economical over time compared to starting from scratch every 10 years or so.

Biggest flaw

I can, and might in a future post, provide concrete examples of what I think should be improved and why. Luis doesn't provide any example solution, or even a scope, even though it sounds a lot like he wants to start over at the hardware level.

If he wants to start over at that level it would be nice to see some preliminary model of how we could do hardware better. He says that it's pitiful that we don't have first-class support for strings, for example. I for one have a hard time seeing how one would support something like UTF-8 in hardware. I might just be limited by my many years of thinking in terms of our current solution though...

Basically it boils down to this: "discussing" or bringing up a problem and saying it needs solving, without providing a suggested solution, is just complaining in my book. Even if that starts to sound a bit too much like "get off my lawn", even for me :)

Specifics

I'd also like to address some specifics from the wrap-up part of the piece that I think highlight some problems with it.

Software caused airplane crashes

How many more airplane or car crashes caused by software bugs do we need to convince ourselves that programming is a problem worth solving?

I know we have some really crap software in cars, and that is something we should REALLY get some regulatory body on right away. (Especially with the upcoming price wars on self driving cars. I bet not every manufacturer will keep to Google's testing standards.) But airplane crashes?

As far as I know, aircraft software is in the category of having to be provably correct, which mathematically eliminates any possibility of a bug in the code relative to its specification. This only leaves human error as the cause, either through an error in the spec or in the configuration of the finished software.

As for the Airbus A400M crash: it's a military transport plane, and I'm not sure if those fall into the same category as public airliners. Also, investigations show it was the configuration of the software, not the code itself, so human error ...

And for the JAS, since I'm from Sweden: it was an experimental aircraft at the time. It might not have been the best idea to fly it over Sweden's most populated city during a festival, but it still wasn't release software, so ...

Those are the only two incidents commonly quoted when googling this issue so I assume no passenger jet crashed and killed everyone due to a software problem. I think we would have heard of it.

Creating bug free software

If we invent better tools to create bug-free software that were easy to use, you could do twice as much for the rest of time.

So this one comes down to tradeoffs. We can already create provably correct software; it's done for airplanes, traffic control software, power plant control systems, etc. But it is expensive and time consuming, and exponentially so the more complicated the system you are trying to prove.

You can trivially create tools to solve trivial problems in a guaranteed bug-free way. The more complex the problems, the more complex the tools, and finally, to solve arbitrary problems, you have infinite complexity.

It's just not mathematically possible, in the current paradigm, to do this (the halting problem and Rice's theorem put hard limits on what we can decide about arbitrary programs), and there is no indication that you can do it at all unless you reduce the primitives of the system to very few and restrict the syntax greatly.

It's the same problem you get when trying to create secure systems. To make the system secure you have to reduce its usability and user friendliness. Just think of two factor authentication. It's a great improvement over simple username and password, but it is an extra hassle.

Stop talking!

If we reduced the need for communication ...

Ok, but how? If we look at information theory we can prove that if you have a project that requires more than one person to execute, there will be a need for information exchange. We can theoretically create models for reducing the amount of information that needs communicating, but we have several uncertainties here:

  • Who are you communicating with? Different persons need different information, even for the same task. Some back and forth is needed or you will have to give the most information possible to ensure the receiver can perform his/her task.
  • When are you communicating? Written documentation, for example, is communication with the future, so it's impossible to know the information needs when you write the documentation. Again you end up having to give the maximum.
  • How are you communicating? Different methods have different overheads.

So there is no way to eliminate communication, and some forms of communication are very inefficient. We have also tried to minimize communication for decades now, with mixed success. So if Luis has an idea here, let's just try it. It's not like it requires a new hardware design or anything.

Everyone is a programmer!

"Additionally, if we made programming so accessible that regular users could make modifications, you could spend less time adding and shipping features for only a small subset of users — they could do that by themselves."

Ok, first things first: normal users do not WANT to be programmers. I know we programmers can't really get this sometimes, but ordinary users just want shit to work, WITHOUT having to hack or script or do any damn thing!

So they don't want to, and they really can't ... This is the whole complexity vs. usability thing again. For something to be easy to use it has to be simple. Complex tasks require complex interfaces. Programming, at least as it is today, is an infinitely complex task.

I think the point that is often missed is that you can solve any problem in an infinite number of ways... It's like results in math; you want 42 (who doesn't :)), so how many ways can you get to 42 using any formula on the left-hand side of = 42? Any number of ways, trivially provable by the series: 42+1-1 = 42, 42+2-2 = 42 ... 42+x-x = 42.

So if you have something where you can model pretty much anything, or at least a VERY large class of problems, and you can model any one of these problems an infinite number of ways, how do you make that simple?

We are talking about reducing infinity to a small fixed number without losing expressiveness. I just don't think you can.

Solving the problem problem

Now from a qualitative point of view. Back in the day math was slow and error-prone and we created the computer. Now, it’s software development that is slow and error-prone. Back in the day math powered innovation. Today it’s software that powers innovation. Solving the problem of programming is the next logical step.

and

... the reversal of our problem-solving approach ...

That's like saying we need to solve problem solving. It makes no sense. It sounds good, in a politician kind of way, but what does it mean?

I agree that we still suffer from some of the reversal of direction that Luis mentions. Instead of solving a problem in the best way possible, we model the problem so that it fits the solution: computers.

When you start looking at today's solutions you generally find that they are pretty good as is, and wouldn't have happened without all the old, crufty hacks that they build on. It would have taken faaar too long to build it all from scratch in code, not to mention trying to solve it with special hardware in every case.

I do get that if we had other means of solving problems we could solve other problems. But we are actively working on that as well. Quantum computers, for example, may let us tackle classes of problems that we can't touch today.

But engineering a new solution from scratch for every problem is too expensive to even be considered, and engineering a new general solution will only end up in the same place: good for some things, less so for others. It's the nature of the world that most things adhere to the no free lunch theorem (a solution that does better than average on one class of problems must do worse than average on another).

Monkey AI

Imagine that we managed to make AI with the level of intelligence of a monkey––that would certainly be a huge technological breakthrough. Now imagine that we all went to play with the AI monkey and we completely stopped all development in the field of AI. You’d be pissed, right? You’d expect some people to keep working on improving AI because it could be much better, right?

Yeah, but that is claiming that no one has continued developing programming since the Analytical Engine. Which, reusing a metaphor from earlier, is like saying that a Bugatti Veyron is the same thing as a horse-drawn carriage... This is simply not true, and not even what Luis claims in the rest of the article.

He is calling for a NEW look at the problem, like trying to solve the issue of transporting goods and people a different way from cars. So what we are looking for is the railway or airplane solution to the programming problem.

As pointed out in the comments to the original piece, and ironically Luis even links to "No Silver Bullet", it is ALWAYS about trade-offs. I bet we COULD find another way of solving these problems, probably several other ways. But would it be better?

Are airplanes better than cars? Or trains? They all have their respective strengths and weaknesses; they all solve part of a multifaceted problem. But is any one of them better? I know Jeremy Clarkson thinks so, but for the rest of us?

I think this is part of the trap that Luis falls into. This is not ONE problem. Computers are STILL used to do math fast and without errors. They are used to run automated control systems, robots and factories, trade stocks, play games, post on Facebook, make and connect phone calls, and thousands and thousands of other things.

This is clearly not one problem, it's a large number of different problems. The fact that we have one solution for all of them is pretty fracking amazing :)

It might be cobbled together, hackish and have plenty of faults. But we are also laying new chips, not on top of but next to older ones. Look at things like X.org replacing Xwindows to drop legacy support. Or musl libc that replaces the legacy libc. Nginx or httpd as an alternative to Apache. We are creating entirely new things on a very low level, not only paving over old crud.

I too think that there are a great many things we can do to make programming more enjoyable and easy, less error prone and more productive. I favour languages with an introspective runtime for that reason. I do not think we need to start from zero, and I know we can't; it's just too expensive.

Only $0.02 as usual

There are many ways to look at this and without anything more concrete, like a proposed solution, to discuss it's hard to get anywhere.

But I did think it worth pointing out that many parts of the problem Luis talks about are either being actively worked on by a lot of people or have deep limitations on their possible solutions.

Some things are, unfortunately, not possible in the real world. One of the amazing things about computers and programming is that everything seems possible there :) I know it isn't, but it hasn't dampened my enthusiasm for the craft of programming in 20 years. And I don't think it will.

One side project per year

I will finish my book

I just read the "One side project per year" blog post by Samantha Zhang and decided that I too have that problem.

So, not that I'm big on New Year's resolutions or anything (which is why I'm doing this in early December instead :)), I decided that out of all the half-finished projects I have lying around I would like to finish my book first.

What book?

I started writing a book about a year ago, or a little more, for everyone who has had the kind of developer education that is offered by most higher education institutions. That is, long on theory and fundamentals and short on actionable advice for anyone who takes their job seriously.

So it's a beginner's manual to craftsmanship, basically. In the first part I cover some of the reasons why code quality is important and why we should care about delivering value, etc. The first draft of this part is already complete, has received some feedback, and I have even started fleshing it out into a second draft.

The second part is going to be more about concrete techniques and tools, like TDD, BDD, refactoring etc, all reconnecting to the ideological foundation from the first part.

I have set up a mailing list for the book if you want to get updates on the book itself. Here you can find a form for signing up.

Also, feel free to subscribe to the blog via RSS if you want to keep an eye on the "One side project per year" side of things :)

CSS3 proven to be Turing complete.

If you got here first you should really check out part one of this blog post, which you can find here: Duty calls - CSS3 is NOT proven to be Turing complete! It's a bit longer but covers a lot more of the background to the things in this piece. This was really just written as a short followup to the first one :)

CSS is actually “Turing Complete”

Eli & Jonas

In early 2011, Eli presented an example of CSS and HTML simulating Rule 110 (which is Turing Complete) at a Hack && Tell event. It spread widely online, from Wikipedia’s article on Turing Completeness to Professors’ websites to blogs, reddit, Q&A sites, and YouTube. This post shows that the original is not Turing Complete, but presents a modification to make it “more” Turing complete than C. The new version is here and details are below.

Turing completeness captures the idea of universal computation. Many very different systems for computation have been shown to be capable of computing the exact same things. The canonical system for computation is the Turing machine: a set of rules governing the behavior of a read/write head on an infinite reel of tape, each cell of which holds one letter from a fixed alphabet.

You can write a simple Turing machine simulator in C. Doing this informally shows that C is Turing complete. But C without I/O isn’t really Turing complete because any implementation is required to decide on a pointer length and that limits how much memory can be stored. So how do we cope with this? One might call this a linear bounded automaton or some relative.
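
To make that concrete, here is a minimal sketch of such a simulator in C. The particular machine (the two-state busy beaver), the tape length and the symbols are chosen purely for illustration; a real simulator would read its transition table and input from somewhere, and the fixed TAPE_LEN is exactly the finite-memory caveat discussed above.

#include <stdio.h>

#define TAPE_LEN 64   /* finite tape: the "pointer length" limitation in miniature */
#define HALT 2

struct rule { int write; int move; int next; };

/* Transition table for the two-state busy beaver, indexed by [state][symbol]. */
static const struct rule transition[2][2] = {
    { {1, +1, 1}, {1, -1, 1} },    /* state 0: on 0 write 1 and go right; on 1 write 1 and go left */
    { {1, -1, 0}, {1, +1, HALT} }  /* state 1: on 0 write 1 and go left;  on 1 write 1 and halt    */
};

int main(void) {
    int tape[TAPE_LEN] = {0};
    int head = TAPE_LEN / 2;   /* start in the middle of the tape */
    int state = 0;

    while (state != HALT && head >= 0 && head < TAPE_LEN) {
        struct rule r = transition[state][tape[head]];
        tape[head] = r.write;  /* write the new symbol        */
        head += r.move;        /* move the head left or right */
        state = r.next;        /* switch to the next state    */
    }

    for (int i = 0; i < TAPE_LEN; i++)
        putchar(tape[i] ? '1' : '0');
    putchar('\n');
    return 0;
}

Any simulator of this kind has the limitation the paragraph describes: once the fixed tape (or the pointer size) runs out, the simulation stops being faithful to a true Turing machine.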

A priori, there’s no limit to the size of HTML documents, but thinking about infinite HTML becomes problematic. For example, with the following CSS, which of an infinite number of divs would be visible?

div { display: none; }              /* hide every div */
div:last-child { display: block; }  /* show only the last one; with infinitely many divs, which is that? */

There are two solutions to this: we can lower our expectations and show CSS is as computationally powerful as C (without I/O) or we can work under a streaming model of HTML.

Assume that the amount of HTML currently loaded is finite but sufficient for all of the state to be properly rendered. Imagine an old-school movie projector changeover system--when one reel is running out, you seamlessly swap to the next. In the world of Turing machines, this would be equivalent to assuming your tape is of finite length but whenever you try to read off one end, someone quickly comes over to the machine and splices in some more tape. For C, maybe after each instruction is executed, the pointer size is magically increased.

The “CSS/HTML machine” we’re implementing consists of a fixed, finite amount of CSS3, along with an infinite (subject to above discussion) amount of HTML broken down into contiguous blocks of characters (analogous to the symbols in a Turing machine), each of which will simulate a Rule 110 cell. In this way, the HTML resembles the tape of a finite automaton or the grid of Rule 110.
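
For readers who have not met Rule 110 itself: each cell's next value depends only on its own current value and the values of its two neighbours, looked up in a fixed eight-entry table, and that simple rule is already enough for universal computation. The sketch below, in C, shows the update each HTML "cell" has to simulate; the width, step count, starting pattern and wrap-around edges are arbitrary choices made only for this illustration.

#include <stdio.h>
#include <string.h>

#define WIDTH 32   /* arbitrary finite width for the illustration */

/* Rule 110: read the 3-cell neighbourhood (left, centre, right) as a 3-bit
   number and look up the corresponding bit of 110 (binary 01101110). */
static int rule110(int left, int centre, int right) {
    int pattern = (left << 2) | (centre << 1) | right;
    return (110 >> pattern) & 1;
}

int main(void) {
    int row[WIDTH] = {0}, next[WIDTH];
    row[WIDTH - 1] = 1;   /* single live cell on the right edge */

    for (int step = 0; step < 16; step++) {
        for (int i = 0; i < WIDTH; i++)
            putchar(row[i] ? '#' : '.');
        putchar('\n');

        for (int i = 0; i < WIDTH; i++) {
            int l = row[(i + WIDTH - 1) % WIDTH];  /* wrap around at the edges */
            int r = row[(i + 1) % WIDTH];
            next[i] = rule110(l, row[i], r);
        }
        memcpy(row, next, sizeof row);
    }
    return 0;
}

The CSS/HTML machine computes the same lookup with selectors and pseudo-classes instead of arithmetic, with the ever-growing stream of HTML playing the role of the row of cells.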

Each step, one enforces all the rules of CSS according to the CSS3 specification on all of the currently-existing (finite) DOM. Then the DOM grows. Finally, a finite, predetermined sequence of mouse and key events is applied to the DOM, updating the :focus, :target and :checked CSS pseudo-classes.

In the original source code from a few years ago (see e.g. here), there’s some +*+*+*+ nonsense. This is used to target a cell in the subsequent row. Unfortunately, this means that the size of the CSS (in bits, say) is proportional to the number of columns. Informally, this means that if you’re allowed, say, 10000 bits of CSS, you can never handle, say, 10^1000000 columns. In particular, all languages that this method of encoding of a Turing machine could recognize can be recognized in PSPACE. This makes Eli’s first example significantly less powerful than Turing machines.

Here’s the gist of fixing it: use anchors and the corresponding “:target” CSS pseudo-class to know which cell to update next. The same style of tabbing-order hack is used: elements with display:none are skipped when pressing tab to jump between checkboxes or links. You can play with it here and check out the CSS here.

Duty calls - CSS3 is NOT proven to be Turing complete!

I keep running into posts, comments, articles and even videos about CSS being Turing complete, and they all cite each other or the same original source, Eli Fox-Epstein's HTML/CSS Rule 110 automaton.

The thing is that none of them seem to have talked to Eli or read the rules of Turing completeness. Well, I have, and all those posts are wrong. Read on and I'll show you why :)

On the simple task of generating random numbers, part 1

As the lead developer on Replay Poker's new poker server I have the distinct pleasure of learning just how little I know about random number generation. Or rather "just how little I knew", since I have recently had the opportunity to "wise up", as it were, and thought some of my inadequacies and errors of thinking might be educational to you as well.

Craftsmanship - Roundup

Need some perspective on the craftsmanship series? Try this! It gives you links to all the articles as well as a short description of each and some links to further material by the author.

Craftsmanship - Part 5

In this last installment of my blog series about the fundamentals of building good software we will talk about how to keep your good code good: refactoring. Refactoring is changing the structure of the code without changing the computational result of the code. In practice it is a large set of transformations, both concrete and abstract, that you apply to your code regularly to keep it in shape.
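
As a made-up illustration of what "changing the structure without changing the computational result" can look like, the sketch below (in C, with invented names) extracts a small, well-named function from an inline expression; both versions return exactly the same values.

#include <stdio.h>

/* Before: the discount rule is buried in an inline expression. */
double total_before(double price, int quantity) {
    return price * quantity * (quantity >= 10 ? 0.9 : 1.0);
}

/* After: the same computation with the rule extracted and named.
   The observable result is identical; only the structure changed. */
static double bulk_discount(int quantity) {
    return quantity >= 10 ? 0.9 : 1.0;
}

double total_after(double price, int quantity) {
    return price * quantity * bulk_discount(quantity);
}

int main(void) {
    /* Both calls print the same total, 54.00. */
    printf("%.2f %.2f\n", total_before(5.0, 12), total_after(5.0, 12));
    return 0;
}

In practice you would lean on a test suite to verify that the result really is unchanged after each such transformation.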

Craftsmanship - Part 4

As promised in part 3 of this series we are going to talk about architecture today. More specifically we are going to talk about why it matters as much as what it is.

Craftsmanship - Part 3

We spent a fair bit of time talking about the Agile movement and its virtues, in my opinion at least. I gave a very, very short intro to TDD and referred to the software craftsmanship movement more than once. Today we will spend some time talking about why we want it, why the Agile movement has failed, in some respects at least, and what the software craftsmanship movement is all about and whether you should care.

Craftsmanship - Part 2

Today I thought we’d handle the two remaining points of the Agile manifesto. We’ll talk about the importance of good people first and then turn our attention to the elephant in the Agile room: test driven development.

I will give some evidence as to why you should choose your job based on people and not tools or salary, and give a “quick” overview of TDD with a slightly evangelical finish. Apologies to those who are tired of evangelical TDDers. You might ask yourself why there are so many of us though.

Testing the DOM in JavaScript

Ever had that sinking feeling when you look at your QUnit test suite and see all of that templated HTML in there to set up parts of the page for testing? Wish there was a better way? Well, not unreasonably, there is! Read on ...

From concept code to finished gem

Another "how to make a gem" tutorial. This time with a real example, rspec-simplecov, from start to finish.

Failing an RSpec suite on poor code coverage.

Ever wanted to have your RSpec suite fail when the code coverage with Simplecov was too low? Now you can, using some pretty clean RSpec internals.

Craftsmanship - Part 1

This is the first part in a series of blog posts that I’m going to write about Agile, Software craftsmanship and how to write better, maintainable software.

I’m going to talk about what’s called software craftsmanship. A collection of ideals, techniques and processes that help us build better, more maintainable software in less time and that produce a test suite, a more cohesive team and better customer relations as byproducts.

I will try to concentrate on the whys as much as the hows. Strangely, this aspect is often missing, especially in introductory material such as this.

The pain of DHH

The recent, ehrm, attacks(?) on TDD by David Heinemeier Hansson have stirred up quite a few responses from people like Robert C Martin, Martin Fowler, Corey Haines and Gary Bernhardt. While I might lack the professional weight of these gentlemen, I think there is more to be said in the vein of Gary Bernhardt's response. So here it is, my take, filled with facts, rhetoric and points.