
Holding a Program in One's Head (2007)

> You never understand other people's code as well as your own. No matter how thoroughly you've read it, you've only read it, not written it.

There is certainly some truth to this. On the other hand, it's possible to become blinded to defects in code you've written yourself. You see what you intended for the code to do rather than what it actually does. Reading someone else's code, it can be easier to see what's really going on, since you just see what's there.

2 months agonickdrozd

The comparison with mathematics also makes sense here. It’s much easier to spot typos in other peoples’ work than your own for exactly that reason: when you read back what you wrote, you read back what you meant to write rather than what’s actually there.

Open any textbook (even a fourth edition of a famous one, written by an expert) and you'll find countless typos. In fact, in a certain sense, the more expert you are, the less suitable you are as a proofreader for such books.

2 months agoxanderlewis

One of my undergrad tutors taught complex analysis from a book she had written, and she offered a reward to anyone who found an error. She said the best students never claimed the reward, only the people who had to study each word carefully.

2 months agolanstin

I proudly study each word carefully.

2 months agoxanderlewis

You touched on it, but I've experienced the same thing reading code when I knew what the code was intended to do, regardless of who wrote it. I miss obvious but simple bugs because I only read what it is meant to do.

2 months agojpc0

I often tell younger engineers that the human brain is the slowest, lowest-memory, and most error-prone runtime for a program. If they're stuck trying to figure out a bug, one of the most effective things they can do is validate their assumptions about what's happening, because there wouldn't be a bug if everything was happening exactly according to expectations.
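
To make that concrete: here's a minimal sketch (TypeScript, all names hypothetical) of turning silent assumptions into checks that fail loudly, so the runtime itself reports which belief was wrong:

    // Turn each silent assumption into an explicit, loud check.
    function assume(condition: boolean, belief: string): void {
      if (!condition) {
        throw new Error(`Assumption violated: ${belief}`);
      }
    }

    // The discount math is only correct under these two beliefs,
    // so state them where the bug hunt will start.
    function applyDiscount(price: number, percent: number): number {
      assume(price >= 0, "price is non-negative");
      assume(percent >= 0 && percent <= 100, "percent is within 0..100");
      return price * (1 - percent / 100);
    }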

2 months agosaghm

> wouldn't be a bug if everything was happening exactly according to expectations

This isn't quite true, especially concerning distributed systems. It's relatively common for a software system to be broken by design. It's not that the developer didn't know how to use the programming language to get the computer to do what they want. It's that what the developer wanted reflects a poor model of the world, a logical inconsistency, or just a behavior which is confusing to users.

2 months agoalphazard

Keep in mind I said that this is advice I give junior engineers specifically; they shouldn't be the ones responsible for designing distributed systems in the first place. For someone in that part of their career, this advice is meant to help them learn the skills they need to solve the problems they're dealing with. It's not intended to be universal to all circumstances, just a useful thing to keep in mind.

2 months agosaghm

That sounds distinctly like an expectation that didn't hold.

2 months agomonocasa

"a poor model of the world, a logical inconsistency, or just a behavior which is confusing to users" I expect when I pull from the queue (but it was designed non-atomically) that I will be guaranteed to only grab the item once and only once, even after a failure. That expectation is wrong, but the developer may have implemented their intent perfectly, they just didn't understand that there are error cases they didn't account for.

2 months agoStefan-H

That's why I learned to log literally everything to stdout, unless a process is time-sensitive, it's deep in production, it has passed the point where bugs and insights occur less than once a month, and there's zero chance of someone asking me what exactly happened with X at Y-ish last Friday afternoon.

The obvious exception is recursive number-fiddling algos, which would spam gigabytes of output due to big N.

This way I can just read the assumptions and see the branches taken and what went wrong, as if it were written in plain text.

When I see klocs without a single log statement, to me it's read-only and not worth touching. If you're stuck with a bug, log everything and you'll see it right there.

2 months agowruza

For large systems the cost of maintaining all of those logs in a searchable system can be prohibitive.

2 months agojimbokun

Just reduce how long you keep the logs until you can afford it. Also, as he mentioned, once a system is getting bugs infrequently, you can lower the log level. My standard is to have a log message for each branch in the code. In C, I would use macros to also keep a count of all the format strings the log package encountered (so I still got a sort of profile of the logic flows, without the sprintf expense), but I haven't figured out an efficient way to do that in Go yet (i.e. not using introspection).
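
The same counting idea can be sketched generically (TypeScript here, helper names hypothetical): key a counter by the format string, and only pay the formatting cost when a message is actually emitted:

    const DEBUG: boolean = false; // flip on while chasing a bug
    const hits = new Map<string, number>();

    function logBranch(fmt: string, ...args: unknown[]): void {
      hits.set(fmt, (hits.get(fmt) ?? 0) + 1); // always count the call site
      if (DEBUG) {
        console.log(fmt, ...args); // format only when actually emitting
      }
    }

    // One call per branch gives a logic-flow profile in `hits`.
    function handle(n: number): void {
      if (n % 2 === 0) logBranch("even branch: n=%d", n);
      else logBranch("odd branch: n=%d", n);
    }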

2 months agolanstin

This is one of the reasons I think that the push towards server-side UI is misguided. It's much easier to walk through the runtime of a program running locally than it is to step through a render that's distributed across a network.

2 months agonfw2

You’ve clearly never used Elixir/Erlang.

2 months agosodapopcan

You've clearly never built a complex UI.

Ad hominem arguments don't land so great, do they?

2 months agonfw2

Oh lordy, I'll bite.

I was responding to your blanket claim that server-side is misguided in general.

I have no idea what you consider complex, but most people pushing toward server-side UI are not advocating it as a one-stop solution, just saying that it simplifies the large majority of situations many of us are in, which is building CRUD apps. You can even get pretty complex, like say an email client, though at that point we're in a grey area where you could go either way. If we're talking about something like building Photoshop in-browser, or even a calendar or gantt chart (which I have worked on), then no, I would not personally advocate server-side; I'd use a good client-side view library instead.

The Elixir/Erlang comment was that it makes server-side even easier as you can hop into running production systems and debug them.

2 months agosodapopcan

I, for instance, learned years ago to refuse to help people debug promise chains in languages with async-await semantics. Rewrite it and then get back to me if it still doesn't work. They usually transcribe the intent of the chain and not the letter of it, fixing the bug they can't see. And if not, the error usually makes enough sense that they can figure it out.
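
A minimal sketch of the kind of rewrite I mean (TypeScript; fetchUser and fetchOrders are hypothetical stand-ins for whatever the chain actually calls):

    interface User { id: string }
    interface Order { amount: number }
    declare function fetchUser(userId: string): Promise<User>;
    declare function fetchOrders(userId: string): Promise<Order[]>;

    // Before: the intent is buried in the plumbing of the chain.
    function totalBefore(userId: string): Promise<number> {
      return fetchUser(userId)
        .then((user) => fetchOrders(user.id))
        .then((orders) => orders.reduce((sum, o) => sum + o.amount, 0));
    }

    // After: the same intent transcribed with async/await. Control flow
    // reads top to bottom, and a dropped `await` or `return` stands out.
    async function totalAfter(userId: string): Promise<number> {
      const user = await fetchUser(userId);
      const orders = await fetchOrders(user.id);
      return orders.reduce((sum, o) => sum + o.amount, 0);
    }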

2 months agohinkley

I believe that is the source of "the worst code I have seen is code I wrote six months ago" feeling. It has been long enough that the code has fallen out of your memory, making you less blind to the defects. You can now see it as an outsider would see it.

2 months agostonemetal12

I don't think that quote is true in all situations. I have worked with a couple of developers long enough that their code reads as easily to me as my own.

2 months agoHumblyTossed

Sometimes when I do code reviews, I understand others' code better than they do and find bugs they don't see. So yeah.

2 months agoahoka

There's some idiom that says something like "you don't understand something if you can't explain it." I think this is the real point of code review: to make a case for why your code is valuable. If it's just a blob of 1000 lines labeled "refactor" or "adding feature", it means nothing. A good commit has some kind of abstract tailored to the work that was done.

Then a review becomes something like: "the claim was made that this function does this; does it look like the function does what it says it does?" If you can understand it in context, then you've added trust and knowledge of how it works.

However, it often seems to be the case that a review turns into "I like this word better than that word". So, back to the explaining point: it becomes a bit of a self-review, with the hope that it might be helpful to somebody else in the future.

2 months agojmkr

There is a balance there. You can have 1000 trivial commits making it hard to see that you really just have 10 features, or ten mind-breaking commits making it hard to see what each feature entails. (Then there's the 50 "try this" commits where you are fighting CI or CD and can't test without going through a git commit/push.)

2 months agolanstin

This is true, and it's one of the problems I have with code generators and, these days, AI-generated code.

2 months agonoufalibrahim

Nothing beats the experience of looking at code you wrote a year ago and thinking “who the hell wrote this nonsense”

2 months agoiamflimflam1

IMO this is one of the most important skills for a developer to have. In an age of Github Copilot and similar systems, it's both far more viable (because, per OP's #1, you won't get distracted by the "side quests" of implementing utilities you need, when you can just tab-complete to get them) and far more vital (because with that productivity increase comes an increase in the complexity of problems you'll be asked to tackle at any given level of seniority).

My advice on this would be: never be afraid to, and even force yourself to, follow the chain of how a certain function is implemented in the third-party libraries you are using. Set up your IDE to be able to click into functions, make that muscle memory, and perhaps even have a dedicated project you can bring up for your site-packages or node_modules. Don't just rely on documentation; see how things actually work a few levels deep. You'll gain a deeper understanding of the code you're using, as well as incredible practice for when you need to do this on an existing first-party codebase.

Oh, and if you can, get one or more large 4k monitors, and split your IDE into 4 quadrants! It's certainly possible to hold a codebase in your head on a small laptop screen, but being able to see the code you're working on alongside its dependencies and dependents makes this far easier!

2 months agobtown

This is one of my standard interview questions: how do you familiarize yourself with a multi-million-line code base with partially correct, partial documentation (in addition to generated documentation)? I'm looking for interesting tooling, and for stepping through the framework code to see how it all comes together (and enough concrete details to verify the claimed work history).

I'm not sure about the advantage of seeing all the code at once for understanding it. Surely it helps for refactoring or massive editing, but to encode it into your brain, I feel the crucial thing is more abstract than actually seeing the code.

I personally find the large screen more useful when I am moving between different low-concentration tasks (Confluence, Slack, Teams, emacs, whatever), but for deep problem solving/thinking, just one window where I can see everything completely, a full-screen emacs, is more conducive. I assume editors other than emacs have this same facility. Extremely rapid answering of different questions about the code via LSP-type facilities is more key than seeing a bunch of text at once, so I can follow the flow of my thoughts. Sometimes, when I'm trying to make sure that 2 or 3 functions have the same flow/logic (yeah, haven't found the abstraction to replace them with), I'll want a big side-by-side comparison of 2 or 3 places at once.

This ability to follow into the third-party code you use was one of the initial attractions of Gentoo - you can build everything from source, keep source and symbols around, and edit/trace the code for anything on the system. Reading a lot of code, I think, makes my own code better.

2 months agolanstin

Any suggestions for a nice 4k 144Hz OLED?

2 months agointelVISA

> […] bottom-up programming, where you write programs in multiple layers, the lower ones acting as programming languages for those above

I like to explain this as "hide the bad parts behind a good API". Anything interesting is going to require "bad parts", which just means the low-level, difficult work. From there, compose up and up until a high-level orchestration is achieved. It works so much better than having the bad parts distributed everywhere! That's what you'd also call a "leaky abstraction".
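
A minimal sketch of the idea (TypeScript, wire format hypothetical): the mucky byte-level work stays in one private function, and everything else composes against the good API:

    interface Reading { sensorId: number; celsius: number }

    // Bad part: wire-format quirks stay contained in here.
    function parseRawFrame(frame: Uint8Array): Reading {
      const sensorId = frame[0];
      const rawTenths = (frame[1] << 8) | frame[2]; // big-endian tenths of a degree
      return { sensorId, celsius: rawTenths / 10 };
    }

    // Good API: callers never touch the bytes.
    export function readSensor(frame: Uint8Array): Reading {
      if (frame.length < 3) throw new Error("frame too short");
      return parseRawFrame(frame);
    }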

2 months agocorytheboyd

I think that many, perhaps even most, engineers incorrectly believe that the main purpose of abstraction in code is simply DRY, as if the goal is to save keystrokes or something.

In my view, the purpose of abstraction is to compress the concepts of your application into digestible and maneuverable chunks, and DRY is just a heuristic for beginners to follow to help point to where appropriate abstraction boundaries may be.

I hope the various theories behind what constitutes good code will make their way out of scattered blog posts and into CSE curriculum.

2 months agonfw2

DRY is an okay rule at the absolute beginning of your programming career, but at some point it must become obvious why it is too simple, or you won't advance much further. DRY is most certainly not abstraction.

I actually took a general Software Development course when I was in school that touched on things like Linux, source control, debugging, and other things I now forget; it was an elective, though. It was neat. The teacher even introduced us to Ruby, which was a cool experience for a bunch of dumb kids who thought the whole world ran on Java or C++ :) I liked it so much I went on to implement Ruby-isms in C++ for my other classes, and use those to orchestrate solutions. Completely dumb in retrospect, but I thought it was so cool at the time. And hey, I guess that's a great example of learning early on how to "hide the bad parts!"

2 months agocorytheboyd

I think DRY should be supplemented by "Don't make me read it multiple times". Repeat yourself, by all means, if that makes it so that I don't get all twisted up jumping between so many files that I can't keep the main thread straight while I read your code.

2 months ago4star3star

Another note on this: if you abstract away code that throws or returns an error, document that somehow.

The number of times I've had to dig through six levels of abstraction to find what error gets returned in some edge case, so I can catch and handle it, has given me PTSD and not-invented-here syndrome.
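
One way to document it is to name every failure mode in the signature itself, so nobody has to dig for what can go wrong. A minimal sketch (TypeScript, error set hypothetical):

    type FetchResult =
      | { ok: true; body: string }
      | { ok: false; error: "timeout" | "not_found" | "unauthorized" };

    async function fetchDocument(url: string): Promise<FetchResult> {
      try {
        const res = await fetch(url);
        if (res.status === 404) return { ok: false, error: "not_found" };
        if (res.status === 401) return { ok: false, error: "unauthorized" };
        return { ok: true, body: await res.text() };
      } catch {
        // Whatever the lower layers threw, the caller sees a named variant.
        return { ok: false, error: "timeout" };
      }
    }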

2 months agojpc0

Don’t repeat yourself…but don’t obfuscate the code either.

2 months agowduquette

New software demands a new vocabulary, and if those new concepts correspond exactly to implemented code, everything becomes very clear and possible to share with new people.

As well, dividing a project into layers where each layer has a manageable number of concepts, 5-7 for normal layers, makes it much easier for people to learn and use correctly. If I have to keep 12 things in mind while using a layer, it's going to be a lot harder to get it correct.

2 months agolanstin

A good API is for encapsulating and communicating ideas between programmers or teams.

DRY is for improving communication with your future self.

2 months agojimbokun

"Hide the bad parts behind a good API... It works so much better than the bad parts being distributed everywhere!"

This doesn't work for the "bad things" I know of. All of the low-level parts which are truly bad/mucky are bad because they can't be entirely hidden. They are leaky abstractions, and by that fact, they ... leak. They impact every part of the system, so they are "distributed everywhere" in the sense that the maintainer of a large program has to consider them everywhere, even if they only directly show up in a few places.

Just as an example, programming languages are abstractions over the complex structure of "raw" memory. Python hides this abstraction more than, for example, C, but once your use of Python reaches a certain scale, you have to consider the details of its memory use. And that's memory in general, which people work hard to make reasonably regular as well as fast.

That's not to say you can't have an API that lessens the low-level problems.

2 months agojoe_the_user

This seems to be one of the core lessons behind John Ousterhout's _A Philosophy of Software Design_

https://www.goodreads.com/book/show/39996759-a-philosophy-of...

and I find that the mechanism of "Literate Programming":

https://literateprogramming.com/

is a useful mechanism for doing this, since it allows one to write about both how the low-level details are implemented in a function, _and_ how the written function is used, in a way which allows the twain to support each other.

2 months agoWillAdams

Said another way: push interface up and implementation down.

2 months agokmoser

Well said, I’ll be taking that :)

2 months agocorytheboyd

I can't take credit for it; I read it in a C Users Journal article back in the 1990s which I haven't been able to find online.

2 months agokmoser

This is what I believe as well. Also throw in some Functional Programming (isolating pure functions and side-effecting functions) for extra benefits.

2 months agoreddit_clone

Both excellent points, and I would add a recommendation for high-level organization: consider the data, then the process. If you can draw a graph where every data structure is connected only to processes, and vice versa, and where processes may take multiple inputs but produce only a single output type, it will make holding the entire system in your head a lot easier, even for very scaled-up systems.

Bonus points if you can distinguish between essential state and circumstantial state.

2 months agokevindamm

A good way to keep a program in your head:

Break it up into a few or several smaller programs that interact through clean interfaces. Then you can keep one smaller, simpler program in your head at a time, then integrate them at the end once all the smaller programs are working.

2 months agojimbokun

Those are called functions.

2 months agominkles

Sometimes.

Sometimes it is one or more programs writing to a queue or topic, and other programs reading from that topic.

Or programs writing to and others reading from a Unix pipe.

Or programs talking to each other using HTTP.

Or Erlang processes communicating concurrently on one machine or across a network.

Or different programs sharing one database.

Or many objects communicating by passing messages in a Smalltalk program.

There are many ways to encapsulate programs and have them interact.

2 months agojimbokun

Or to put it another way…clean architecture is how you arrange the code so that you don’t need to keep ALL of it in your head at once, just the big picture and this bit here.

2 months agowduquette

Totally agree, and it depends on the current context of the programmer. For example, in the ACPUL language a program is split into boot images, files, modules, functions, and expressions. All of these represent different levels of context encapsulation.

2 months agoellis0n

> Or programs writing to and others reading from a Unix pipe.

write(message), read() -> message

> Or programs talking to each other using HTTP.

request() -> response

> Or Erlang processes communicating concurrently on one machine or across a network.

sendMess(message), waitMess() -> message

> Or different programs sharing one database.

execute(query) -> response

...

I'm a mathematician at heart so I'm staying away from category theory as long as possible.

2 months agominkles

The UNIX way would be to break things down into actual programs, where each one would be doing one thing well - and in such a way that they'd be useful as building blocks for solving other problems also.

2 months agojll29

The unix way is re-parsing text 5000 times in separate programmes with no consistency or contract between them really.

That's not great.

2 months agominkles

Only great enough to be the operating system for the vast majority of computing devices on the planet.

2 months agojimbokun

In close competition with the well renowned JavaScript.

2 months agoToo

This must not be done prematurely. Replacing function calls with processes quickly breaks "go to definition" and all the other conveniences and guardrails provided by your compiler toolchain.

Unfortunately, I've been dropped into that situation. The system was constantly broken due to missing or malformed arguments. Nobody dared to change anything.

The same applies to prematurely microservicing a monolith. At least there, folks seem to be sensible enough to use protobufs or JSON instead of loosely typed strings.

2 months agoToo

imagine having the brain to synthesise every function across a codebase, most humans gotta settle for interfaces :p

2 months agoAeglaecia

Interfaces are just collections of function prototypes.

2 months agominkles

and some projects have more interfaces than are memorable by the average human

2 months agoAeglaecia

Making as much of the program as possible pure functions and enforcing types works extremely well for this task. I would only add that clean _enforced_ interfaces are key.
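
A minimal sketch of that shape (TypeScript, names hypothetical): a pure core, with effects injected through enforced interfaces at the edge:

    interface Clock { now(): Date }
    interface Store { save(line: string): void }

    // Pure: same inputs, same output; nothing hidden to keep in your head.
    export function formatEntry(message: string, at: Date): string {
      return `${at.toISOString()} ${message}`;
    }

    // Impure edge: effects are injected, so the compiler enforces the seam
    // and tests can pass in fakes.
    export function log(message: string, clock: Clock, store: Store): void {
      store.save(formatEntry(message, clock.now()));
    }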

2 months agomejutoco

Brilliant article. I read a great book by Jessica Livingston (Paul Graham's wife) called Founders at Work. In the book, Steve Wozniak talked about how he got designs done fast by keeping it all in his head. Ever since I read that, I have tried to do the same. I would recommend this to anyone working on their own project.

Of course, in a corporate setting it is much harder to do, as you have code written in multiple languages using multiple patterns stored in different places, so often there is no underlying architectural thread to hold it all together.

2 months agozubairq

One of the things I liked about programming the Apple II was that I could also hold the whole machine in my head. It's quite a contrast to wondering which of the 4,000 methods on a Java textbox I need.

2 months agotonyedgecombe

I once watched a presentation by Dan North where he said that a microservice should never be bigger than your head. What he meant was that all the code for the microservice should fit on your screen: you should be able to put your head against the screen and have it cover the code.

Yes, this was in the microservices heyday.

2 months agomingusrude

A quote originally (AFAIK) from the wonderful book 'Practical Common Lisp'.

https://gigamonkeys.com/book/

2 months agofredrikholm

This is probably a common Lisp/Scheme type of thought. Dan Friedman also said something about how he only likes code that he can hold in his head to think about in the shower. I forget the source, but it's in one of his talks. I think Sussman has also said something similar.

2 months agojmkr

Anything you create in software should be able to fit in someone else's head. I.e., you should be able to think of it as a tractable arrangement of abstractions that let you reason about it in a precise, non-leaky way.

Those abstractions don't just poof into existence after the project is complete, though. You have to design them into the system and communicate them to the people who need them.

The abstractions also need to be precise and non-leaky enough to be useful. One of the most dangerous talents in software is the ability to create the illusion of tractability by using vague language. You can create an absolute mess and then describe it to management in a way that makes it sound well-understood. This is the most lucrative and destructive skill a consultant can have.

2 months agodkarl

Unfortunately, all the market giants follow this destructive practice, rewriting 95% of all code every five years to multiply added value.

2 months agoellis0n

I'll call out

   7. Don't have multiple people editing the same piece of code. 
   You never understand other people's code as well as your own. No matter
   how thoroughly you've read it, you've only read it, not written it. So
   if a piece of code is written by multiple authors, none of them understand
   it as well as a single author would.
On some level it's true but it is also true that most of the world's code is in some sort of maintenance mode and the original developer is not always available. When I work on code that I think would be difficult to maintain I write a lot of comments to explain invariants that should be followed or anything strange. (Hmm, there are three different kinds of "row" talked about in this source file)

If you have a front end-back end system and you want to do something simple like add a field to a form, there's a strong case for a single developer (or maybe paired developers) making all the changes to make that change happen, as opposed to assigning two developers (not paired) to communicate with each other about the task, or worse yet, assigning two teams. You might have had two people build out the front-end and back-end systems, but for sustainable maintenance one person should be able to ship a feature.

2 months agoPaulHoule

When I’ve had to take over maintenance of code that isn’t mine, and is poorly commented and documented, which is usually the case, I work through the code and comment it. That eventually leads to refactoring, adding unit tests as needed, and so on…and eventually the code that wasn’t mine is now mine.

But there's no time in the schedule for that? Either that's not true (the long way round is usually the shortest way home), or it's time to find another project. And if you're the sort of person who has the skills Graham describes, that shouldn't be hard.

2 months agowduquette

I do pretty much the same. If any part of the code makes me go "wtf" and sends me down a rabbit hole to figure out why it's like that, I'll typically write a comment about it to save myself and the next guy from that.

However, I have one quibble. Almost invariably when I explore other people's code, there are a lot of these "wtf" moments, and sometimes they end with "oooh, that makes sense", but honestly most of the time they end with "ugh, this is stupid".

For example, I was tasked with factoring out Newtonsoft.Json in favor of System.Text.Json in an Azure function. I looked through the code and it was some of the worst shit I've ever seen. And completely untested: there were two test classes with a few test functions, but they literally tested nothing. The only way you could have failed those tests would be by adding a `throw new Exception()` as the first line of the function the test was calling. Seriously. The rest was a foreach loop iterating over a list parameter, and the test was passing an empty list.

The core of the whole thing was this big chain of functions where over a dozen different parameters were passed down through about two dozen different methods, some weren't even used, some were repeatedly serialized to json, passed as a string to the next method, deserialized and passed as an object to the next method etc. There was this whole complicated AsyncEnumerable setup that served no purpose at all except complicating the code. There was a lot of other stupid stuff. I really don't think my description adequately conveys how bad this code was.

It was fucking atrocious. A coworker did some pair programming with me and suggested asking the author about it, I said I didn't want to talk to him because I didn't know what to say. I don't think I could have had that conversation without just completely shitting all over his entire project. There weren't any questions to ask, there was no "maybe I just don't understand the reasoning". I understood the code perfectly and it was shit. Plain and simple.

So I did what I tend to do in these situations, which is the subject of my quibble - I rewrote it. Instead of trying to figure out how to do what I needed to do in the context of this complete mess, I wrote some tests to establish the current functionality (surprise, it uncovered multiple glaring bugs - not sure how nobody noticed it wasn't even working properly for months) and then I just deleted all the trash code and wrote it properly. It wasn't a very complicated program, it literally just gets some data from a few API endpoints, massages it a bit and sends it off to an event hub as messages.

So my quibble is that a lot of people don't appreciate this approach. They call it scope creep: I get some task that should be fairly small, but it leads me down a rabbit hole where I end up rewriting or refactoring large chunks of code in addition to what I'm supposed to do.

I think it's a nuanced topic; my approach is appropriate for some situations and not for others. For example, if I'm working on an app that only needs to exist for another 6 months, it's reasonable to minimize the effort spent on it; it doesn't make sense to refactor stuff. In the case I just described, I think the rewrite was appropriate. My team agrees with me on that, though I'm not sure everyone would. And I'm having trouble drawing that line: when should I fix stuff and when should I work around it? I love great code and I hate bad code; it honestly really bothers me when I have to work with some moron's spaghetti. So I think I'm pretty biased towards fixing stuff like that when I see it. I'm happy to be in a team that appreciates this side of me, but I'm worried about when I inevitably end up in one that doesn't.

2 months agosfn42

Ultimately you have two choices: you refactor the architecture as needed as part of implementing the current feature...or you end up drowning in technical debt farther down the line.

A properly architected code base makes it easy to make the kind of changes you want to make. A badly architected code base (as you describe) makes it nearly impossible. I've found it's usually by far the quickest thing to fix the architecture and then implement the new feature sanely. (And no, you don't ask permission; you just make everything clean. As I noted above, if your boss doesn't like this kind of thinking, you need a new boss.)

Of course, I'm one of those guys Graham mentions who works by himself most of the time.....YMMV.

2 months agowduquette

Yeah, I completely agree with you. I might be a bit "scarred" by my previous team. We were working on CMS websites for a fairly large number of different clients. Every task was estimated by the hour, and if we went over that estimate it usually meant we were working for free. I mean, I still got paid, but my company didn't. It didn't help that all the solutions were either developed by juniors or inherited from other departments/companies. Literally every moderately popular frontend tech was represented: React, Vue, AngularJS, jQuery, Next.js, Svelte, and more. Let's just make our lives as difficult as possible, eh? What should we use next? The back ends were all the same CMS, luckily, but of course some genius had to go and add Sanity to the mix to pad their resume.

So I frequently had to learn a new js framework while dealing with shitty, unreviewed and untested code that barely worked or just kind of seemed to work, while under time pressure.

In that environment, my disdain for bad code really just didn't work out. Though I feel like I was a bit unfairly treated at times as well.

There was this one app that had an integration against a third party API. They had used a library to interact with the API, this library was purchased for a fairly significant amount of money from some university and it was complete garbage. All it did was get data twice a day and store it in a db, then the CMS had a daily scheduled job that would get the data from that db and build the content in the actual CMS db. It was like 30k loc, a lot of which was dead and the rest of it was shitty and redundant manual xml parsing and completely unnecessary db interaction. Not to mention the (probably minor) cost of running this pointless db.

So there's a problem with it and I spend days bughunting; I can't figure out what's wrong. I try telling the team that I think I can build a better solution from scratch faster than I can fix this bug. No bueno. So over the weekend, on my own time (really just a few hours), I throw together a quick POC. I don't need the intermediate DB; the CMS can just get the data from the API once a day during the scheduled job. So 90% of the work is just making DTOs to deserialize the XML into, using whatever XML serialization library I found.

I finish it, hook it up, it's not totally done but it pretty much works already. On Monday I tell the team what I've done and just get completely shut down. They don't even want to see it, just nope.

So that code is gone and they're still maintaining that piece of shit library for no reason. Oh well, luckily it's not my problem any more.

Sorry about the rant, and thanks for the support. It's nice to know I'm not alone in seeing the value of quality.

2 months agosfn42

I think it was 2007 when the vision of software, developers and resources was still limited. We didn't even have IoT and the first iPhone was seen as a novelty. Now it's clear that this is not the case. I founded a company in 2006 and started focusing on code review because I was interested in code and wanted to be the best. Since the advent of GitHub, I have reviewed thousands of repositories across many languages, worked on architecture, tried to create better code, developed several operating systems, hundreds of projects and dedicated 12 years to programming languages. In the end, I realized that the world is ruled by chaos and the simplest LLM AI is best suited for the task of code functionality.

For example, consider this analogy: when we make a car, we have one wheel and a multitude of parameters that change over time. The wheel is a derivative of these parameters. In car manufacturing, time is virtually unlimited for improving the quality of the wheel. But when we develop software, we always face time constraints and an enormous set of parameters. Thousands of car engineers spend decades improving a single wheel with a limited number of parameters, while a single programmer (or a couple of programmers) has only a short amount of time to release software with a randomly variable number of parameters (a service depends on Auth0, and its parameters change). As a result, the programmer cannot be certain that all parameters have been handled correctly. This is why the programming process is similar to learning, and this is where LLMs are at their best.

2 months agoellis0n

Actually I think it was in 2007 that I was leading my company's first IoT project to get remote access to our medical instruments' operational data.

2 months agoHeyLaughingBoy

I more or less agree with the main thesis although I would point out that typical software development practices in industry make this very difficult. Fred Brooks told us to plan to throw out the first draft. Industry says ship it.

2 months agonorir

That is true. Then these drafts travel, fly, and feed you. Don't be surprised by plastic in every micron of space: it was just a draft. By the way, the first refrigerators without Freon also caused harm, but back then people were only thinking about reliability and didn't have the resources to tackle the full scope of the problems. Now the goals have changed, driven by the pursuit of profit.

2 months agoellis0n

Another PG classic masterpiece - thanks for re-posting. I agree with all eight points, and this should be mandatory reading for people who manage developers, particularly if the new manager does not herself/himself have a development background.

Just to elaborate on "start with a simpler subcase": you can even start with a single example and do by hand what your code will eventually be supposed to do automatically. After your draft architecture stands, you can even hardwire what each component does for that one example, to see if things fit together before implementing the general method for each component. This provides an opportunity for early integration testing that is priceless.

The current essay recommends building bottom-up, with lower layers that serve as primitives or languages (DSLs) for the next levels. This is true, but you can also do it upside down. For example, you can write code that looks like pseudo-code because the implementations of the functions you call do not (yet) exist; see the sketch after this paragraph. The code directly reflects how you think, and you worry about the implementation later. A special instance of that is API design, where you design interfaces without implementing them yet; you can then write client code and judge whether the hypothetical API would be flexible enough, easy enough to use, and feature-complete and orthogonal. When designing an architecture in a team with multiple people, you can use paper cards with an example piece of data that you pass from person to person, with each person "performing" a component. They can then check that they have what they need to carry out their task (otherwise there may be bad surprises when integrating components later).
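
A minimal sketch of that pseudo-code-first style (TypeScript, names hypothetical):

    // Every function the top level calls starts life as a stub.
    function loadInvoices(month: string): string[] { throw new Error("todo"); }
    function totalOf(invoice: string): number { throw new Error("todo"); }
    function report(totals: number[]): string { throw new Error("todo"); }

    // The top level reads like pseudo-code and directly reflects how you
    // think about the problem; implementations come later.
    function monthlyReport(month: string): string {
      const invoices = loadInvoices(month);
      const totals = invoices.map(totalOf);
      return report(totals);
    }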

I found that some people lean more "top down" and others more "bottom up"; I like to mix both styles, with system architectures designed top-down and the core algorithms inside their components often designed bottom-up, sometimes also top-down.

Ironically, looking only at the headline, one could say that abstraction enables one to solve problems without holding the whole problem in one's head, or problems bigger than what anyone can hold in their head, at least not all at the same time, which is kind of the whole point made by SICP.

2 months agojll29

Take a look at the list of people who read the draft of this post.

> Thanks to Sam Altman, David Greenspan, Aaron Iba, Jessica Livingston, Robert Morris, Peter Norvig, Lisa Randall, Emmett Shear, Sergei Tsarev, and Stephen Wolfram for reading drafts of this.

2 months agorshudson

I’m always amazed at the impressive list of people pg gets to review his essays. His recent essay “Founder mode” was no exception.

2 months agojoshdavham

People who are famous for running companies and not writing code.

2 months agoakira2501

Robert Morris, Peter Norvig and Stephen Wolfram didn't write code?

(I don't recognize some of the others.)

2 months agojimbokun

Is that what they're famous for or even what they've dedicated most of their time to? From looking at their histories I would not describe them as "coders." Let alone presume that they have anything to offer anyone doing the same in a modern context.

"Keep the whole program in your head?"

Cool advice for small greenfield projects that you can sell to unsophisticated companies and then move on. Interesting that it comes from an investing firm that makes this precise activity its core business, and that the essay was reviewed by people who are in a position to directly benefit from this.

This is all terrible advice if you want to actually contribute to or work on anything significant in your life.

2 months agoakira2501

yes, writing code is what all three of them are most famous for. especially writing code for vast, significant projects spanning decades, in a way that contributes to something significant. your criticism could hardly be further off target. you dismiss their knowledge at your peril

2 months agokragen

I don't know how far this can go, but it's a feeling I've had which is relevant here and which some people might resonate with.

I used some tiny devices (e.g. the Casio PB80, http://oldcomputermuseum.com/casio_pb80.html) as a kid to write small programs in BASIC. It had only a single line of display, and you had to sort of remember where things were to make your GOTOs work properly, etc. This meant that most of the program had to be in your head. You couldn't easily jump between things and see the whole thing as a single screen of text. I also remember reading that people like Ken Thompson wrote significant parts of the original UNIX in a line editor (ed). That would also mean that most of it had to be in his head in a semi-parsed format.

Sure, with modern editors and IDEs, you can outsource that to the computer and focus on the more valuable stuff, but I still wonder how much the grunt work of remembering things in detail is underappreciated in creative work.

2 months agonoufalibrahim

I'd add "Focus on the Nouns, not the Verbs", or more prosaically, the data and the changes that occur to data, not the process of changing it.

2 months agorswail

A great article that accurately describes the challenges I've faced over the past 12 years while creating the best project for programmers, the various attempts to find work and funding, and observing all the madness in the organizations. It seems like the best of it was left in 2007, from which we still draw inspiration and knowledge, while since 2008 we've been living in a new world of marketing and wars. The insatiable interests of faceless systems are destroying everything alive in this great science of "programming".

2 months agoellis0n

As a developer who has aphantasia (can't visualize at all), I'm curious how much visualization is used by others when "holding a program in your head". I can hold a program in my head just fine without visualizing anything, I'm just curious how much of it involves visualization for others.

2 months agogodshatter

"Use succinct languages" is somewhat at odds with "Write rereadable code." There's a point beyond which making your code more succinct makes it more difficult for a human to parse. This can be somewhat mitigated by comments but I'd rather just read more readable code than more succinct code.

2 months agokmoser

You always have comments. I’ve always been fascinated by GitHub repositories like "odd-even" where the code is just a few lines, but the README takes up several times more text. I even started a small collection of these fun React modules :)

2 months agoellis0n

Succinct languages don't force you to be succinct. They only allow you to be succinct where it helps.

(I'm sure there are exceptions, but in the set of languages that enable succinctness, the subset that force succinctness is surely small.)

2 months agodkarl

Sure, but it's all so vacuously ambiguous that it doesn't really matter. I hate advice lists like this because it can essentially be summarized by saying "write good code" and you'd be left with almost exactly as much wisdom.

What does "succinct" mean? Well, it means terse but not too terse. What does "readable" mean? It means I can easily understand it.

How is this helpful?

2 months agoragnese

I completely disagree.

It's possible to write code so succinct that making it more verbose would make it more understandable.

But overly verbose code is far, far more common. And having a naturally succinct language does not stop you from keeping your code verbose. It just gives you more ways to make your code succinct that aren't possible in non-succinct languages.

(Note this is orthogonal to weaker or stronger compile-time type systems. Type systems that catch more errors at compile time are very useful. But some strongly compile-time-typed languages are still quite succinct.)

2 months agojimbokun

I think this is addressed in the link under that section

https://paulgraham.com/power.html

> I think that the main reason we take the trouble to develop high-level languages is to get leverage, so that we can say (and more importantly, think) in 10 lines of a high-level language what would require 1000 lines of machine language.

...

> 5. Write rereadable code. All programmers know it's good to write readable code. [...] If you're writing for other people, you may not want to make code too dense. Some parts of a program may be easiest to read if you spread things out [...] Whereas if you're writing code to make it easy to reload into your head, it may be best to go for brevity.

2 months agojmkr

> You never understand other people's code as well as your own.

Not only is that often false, but there are times when you can understand another author's code better than that author does. Like being able to see through it and why it cannot possibly work, while they are still laboring toward making it work.

2 months agokazinator

>Probably the best we'll do is some kind of hack, like making the programming parts of an organization work differently from the rest. Perhaps the optimal solution is for big companies not even to try to develop ideas in house, but simply to buy them.

I remember the CTO of a big American bank telling me they didn't want to develop any software in-house. Their plan was to buy everything in. At the time I thought it was rather strange but having dealt with a lot of enterprise software since then I can see why.

2 months agotonyedgecombe

>I remember the CTO of a big American bank telling me they didn't want to develop any software in-house. Their plan was to buy everything in. At the time I thought it was rather strange but having dealt with a lot of enterprise software since then I can see why.

It's a very pragmatic choice for the majority of business use cases. It should be more common imo, but some execs enjoy the empire building aspect and underestimate TCO when looking at cheap offshore dev resources.

2 months agointelVISA

> You can magnify the effect of a powerful language by using a style called bottom-up programming where you write programs in multiple layers, the lower ones acting as programming languages for those above. If you do this right, you only have to keep the topmost layer in your head.

And if you do it wrong (overengineering), you would need to learn not just one, but as many programming languages as there are layers...

2 months agosnowwlex

That's IMHO what OOP (as understood in the Java world) tends to evolve into...

2 months agosnowwlex

Reminds me of when I spent some time learning functional programming. I did a Scala course and I still remember solving some of the programming challenges. The slightest lapse in concentration - I would immediately lose understanding of what I was writing. It would turn from something amazingly simple and elegant into a thing that was incomprehensible.

2 months agoiamflimflam1

Which is why working memory performance is so tightly correlated with so many aspects (maybe not all) of programming.

2 months agobloqs

How relevant is this when AI has the ability to both write and debug massive code bases?

I imagine a future world where 100% of coding and debugging is done by prompt not by editing code.

2 months agoFrankLicea

If that comes to pass then I can foresee a world where code bases grow in size and complexity to the point of being barely maintainable. After all that's what we tend to do right now.

2 months agotonyedgecombe

It's effectively no different than the shift from coding in assembly to using a compiler. We could abstract a level of complexity away, which should always be a positive. There will always be a level of human decision-making, because AI relies on human decisions and choices, so higher-level architecting will be the new "scripting language" and we will all become the new neckbeards complaining that nobody knows what's happening under the hood.

2 months agolubujackson

I call this program-in-one's-head state "suspended comprehension", as it has a degree of transience and fragility that requires effort to resume where you left off.

2 months agokulor

It's (mostly) not about the code. It's about the data structures and their relations.

2 months agoamelius

As someone who has written programs professionally his whole career, I find the idea that any programmer holds an entire non-trivial program in their head to be laughable. Good software design means writing programs in such a way that you absolutely do not need to hold the whole thing in your head at one time.

2 months agoo_nate

> And of course you can't safely redesign something other people are working on.

Unless, you know, it's automatically tested to hell and back.

2 months agokazinator
