codersnotes

The Danger Of Opinions
September 3rd, 2017

Disclaimer

Warning: this post may contain opinions. If you are allergic to opinions, please try the associated reddit thread instead where you will be safe from them.

There are two schools of thought about how we should treat computing. One thinks programming should be about writing things to best reflect the truth of how they will be executed (C, C++, Pascal, Go, Rust, etc). The other thinks we should write against some universal ideal, and the computer should just deal with that for us (Python, Ruby, Lisp, JavaScript, etc).

For years, MIT taught their SICP course using Scheme. And you know the weird thing about that? No computers involved at all. It was all just done on a whiteboard, using symbols and parentheses. No registers, no instructions, no memory. It showed you what computing really was -- an abstract concept that isn't tied to any implementation. The idea that computing doesn't actually require a computer is somewhat alien to many native C++ programmers.

Then you've also got the engineering crowd. People like me whose first exposure to computers was that 8-bit home computer your Dad brought home one day in the 80s. I didn't grow up in a world of evaluation, expressions, and functions. The computer I had knew only about bytes, and how to move them about in memory. It was always about how things get done. What use were abstract concepts in a world where you needed to do specific things in order to see the results?

These two groups often fall under the banners of "static" and "dynamic" typing, and it's perhaps no coincidence. Static typing tries to tell the computer exactly what needs to be done, at the expense of moving the program further away from the abstract description. Dynamic typing expects the computer to figure things out, so that the human can just write things in a nice clean manner.
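A toy sketch of that split, in Python (purely illustrative -- the function names here are made up for the example):

```python
# Dynamic style: the human writes the clean abstract version, and the
# computer figures out at runtime what the operations mean.
def total(items):
    return sum(items)

# Static style: the human spells out the intent up front, so a checker
# (e.g. mypy) can verify the program before it ever runs.
def total_checked(items: list[int]) -> int:
    return sum(items)

print(total([1, 2, 3]))          # → 6
print(total_checked([1, 2, 3]))  # → 6
```

Both run identically; the difference is only in who carries the burden of proof, the human or the machine.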

Which leads on to the ultimate question of programming: Should programs be written for the benefit of humans or for computers? Exactly whom are we trying to make it easier for?

It's a simple point but you see the repercussions of this appearing everywhere, hundreds of little design decisions that push programming further into two camps. UNIX, for example, demands a case-sensitive file system, on the grounds that the file system can be done more efficiently if it's only concerned with matching bytestrings. Windows says that being able to create two files with the same name but different cases isn't useful to humans, and is only confusing. Why should humans have to keep track of where the capitals were placed, and why should auto-complete suddenly stop working because I forgot to hold SHIFT?
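The two policies are easy to demonstrate; a sketch in Python, using nothing beyond string comparison:

```python
# UNIX-style matching: a filename is an opaque bytestring, so equality
# is a single byte-for-byte comparison -- fast, but case matters.
print("Readme.txt" == "readme.txt")  # → False

# Windows-style matching: names are compared for human meaning, which
# costs a case-folding pass on every lookup.
print("Readme.txt".casefold() == "readme.txt".casefold())  # → True
```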

So which is right? Which is better? Should computers adapt to us or should we adapt to computers?

There's a book I love by Robert M. Pirsig called "Zen and the Art of Motorcycle Maintenance", which contains over 500 pages of pseudo-philosophical bullshit (oh who am I kidding, I still love it) centered around the idea of "quality". It's got this lovely disclaimer at the front where the author notes that the book doesn't really have anything to do with Zen Buddhism, and "it's not very factual on motorcycles either."

The central pillar of the book is what he calls the "classical" vs. "romantic" ideals. The classical, he says, is concerned with what something is and how it works. The classical viewpoint wants to know how their motorcycle works, how to recognize where that weird knocking noise is coming from, and wants to tune their engine to keep it running well.

The romantic viewpoint is instead concerned with how we see something. It's not important how something works, but how we see and use it. The romantic person wants to use their motorcycle to drive along beautiful mountain roads, and use it to get to far-away places.

The classical person sees a rainbow and wonders how it formed, and how the rain might reflect the sun like that. The romantic person sees a rainbow and wants to show others, and paint a picture of it.

This two-sided philosophy is found throughout the whole of human life, and especially in computers. One of the things I love about computer programming is that it's one of those areas where we actually get to use both at the same time, even within the same program. It's what makes a game developer want to be an artist or a programmer. And yet the game needs both to work.

So which is better? Well unsurprisingly, neither. You need both viewpoints, sometimes at the same time. And that's the weird part. How can two opposing ideas both be correct?

But they can.

I remember once talking to an artist friend of mine. We were talking about computer animation, and the subject of IK (inverse kinematics) came up. What puzzled me is that he wasn't a big fan of it. Now to me as a programmer, it seemed an obvious choice. Of course IK is a better way to do things. You just tell the arm or leg or whatever where you want it to go, and it automatically moves the elbows and knees and such for you. So surely that's less work, and therefore better?

But he explained things to me. You remember when as a kid, you drew stick men? Well in my mind that's how the human skeleton looked. But he explained about "clavicles", something I barely even knew existed but in fact drive the whole upper armature. And he explained how the best algorithm in the world isn't going to give you the results you want if there's more than one solution available. What had seemed a simple "pointing a finger" problem was unfolding into a world where you had to try and teach the computer how to be an artist. It slowly dawned on me that I didn't have the full experience of the problems he was describing, and couldn't make a case to argue back with.
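The multiple-solutions problem he described falls out of even the simplest IK setup. Here's a minimal sketch (a planar two-link arm, solved analytically -- a toy, not how a real animation package does it):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic IK for a two-link arm in a plane.

    Returns ALL valid (shoulder, elbow) angle pairs. For most reachable
    targets there are two -- "elbow up" and "elbow down" -- and the math
    alone has no way to pick the pose the artist actually wanted.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c <= 1.0:
        return []  # target out of reach
    solutions = []
    for elbow in (math.acos(c), -math.acos(c)):  # the two elbow bends
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        solutions.append((shoulder, elbow))
    return solutions

# A reachable target generally yields two distinct poses:
print(len(two_link_ik(1.0, 1.0, 1.0, 1.0)))  # → 2
```

Add clavicles, shoulders, and spines to the chain and the solution space explodes; picking the *right* answer from it is the artistry part.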

It's weird when you suddenly realize that there's a separate world out there that you're not an expert in. It certainly changed my outlook on things. I think there's a lot of programmers who still haven't had that moment, and still live in a world where they believe they know everything.

Did that make me wrong about IK? Well, no. It's still useful. Did that make him right? Maybe, maybe not. But what it shows is that you can't have a discussion one way or the other unless you actually know a little about the other viewpoint.

Issues aren't black and white. And sometimes you can have two opposing viewpoints that are both valid. Programmers hate this. It's very un-pythonic.

Did you ever have an experience where someone you'd greatly respected suddenly said something you strongly disagreed with? Does that invalidate all the things they said that you did like? Do you stop talking to your best friend because you found out he voted Republican?

Complex issues can't just be simplified down into tribal arguments of us-vs-them, or solved by just shouting at the other person until they go away. We need to get over this cultural idea we have where anyone who disagrees with us is literally Hitler. It's OK to disagree with someone. And just because we disagree with someone doesn't make them wrong. It's possible for two people to disagree and yet they both still are correct.

It's so common, especially in the media, for someone who changes their mind later on to be labeled a hypocrite. "But last year," they cry, "you said this thing. Now you're saying the other!". Yet the ability to change our mind is the most important thing we have. An opinion that is rock-solidly fixed in place is just tribal politics. Opinions should be swayable via convincing arguments.

On the one hand it's easy to look at things like the direction the C++ committee is taking and laugh; C++ has become an insane language that no one person has a hope of understanding. But they definitely seem to be heading towards a destination, perhaps if only by accident. What are they actually trying to achieve? A language where you can do anything, but only at compile time? Perhaps Python is a cleaner approach, by pushing all problems to runtime, but even they're now starting to realize that maybe type annotations are a useful feature.
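Those annotations are a telling compromise: a hint layer bolted onto a dynamic language, recorded by the interpreter but never enforced by it. A minimal sketch:

```python
def double(x: int) -> int:
    # The annotations are stored as metadata for external tools like
    # mypy; the runtime itself happily ignores them.
    return x * 2

print(double(3))     # → 6
print(double("ab"))  # → abab -- no runtime check of the annotation
```

Static checking for the humans who want it, dynamic behavior for the ones who don't; both camps get to claim the language agrees with them.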

But on the other hand, you've got languages like JavaScript, where they've just this year invented co-routines and are now treating them like the Second Coming. And compiling source code into a binary form is still considered a great unsolved problem in computer science. And yet... there's still this "and yet" that hangs around with it. I mean can you imagine trying to write web-apps in C? At least JavaScript has a string type.

But we need both. The idea of the "one true" anything is bullshit. There will always be different sides, with different ideals. And that's fine. We need that. But what we don't need is us-vs-them. People need more exposure to different ideas. Programmers need to try out different languages. Webdevs could learn a hell of a lot from trying to write a Z80 program. And a lot of GPU shader guys could learn a thing or two from watching how Bob Ross can manage to paint a tree without knowing how sub-surface scattering works. Because let me tell you, however you've been doing things so far, there's a whole different approach that other people have been successfully using that you have no idea about.

I dunno where I'm going with all this. I just figured I'd write some of these rambling thoughts down, although putting your thoughts into words can get you fired these days. Probably best just to stay absolutely quiet and avoid doing anything that may or may not cause two opinions to form. We do have to be careful, you know. Sometimes we can create a difference of opinion so vast that the universe has no option but to bifurcate in order to accept both.

Written by Richard Mitton,

software engineer and travelling wizard.

Follow me on twitter: http://twitter.com/grumpygiant