Someone said on HN a while back that this sort of approach is the most sensible, because any design doc is just an incomplete abstraction until you get in the weeds to see where all the tricky parts are. Programming is literally writing plans for a computer to do a thing, so writing a document about everything you need to write the plan is an exercise in frustration. And if you DO spend a lot of time making a perfect document, you've put in far more work than simply hacking around a bit to explore the problem space. Because only by fully exploring a problem can you confidently estimate how long it will take to write the real code. Often times after the first draft I just need to clean things up a bit or only half complete a bad solution before improving it. Yes, there are times you need to sit and have a think first, but less often than you might imagine.
Of course, at a certain scale and team size a tedious document IS the fastest way to do things... but god help you if you work on a shared code base like that.
I've always thought the rigid TDD approach and similar anti-coding styles really lend themselves to people who would rather not be programming. Or who at least have a touch of OCD and can't stand not having a unit test for every line of code. Because it really is a lot more work, both up front and in maintenance, to live that way.
Cyber-paper is cheap, so don't be afraid to write some extra lines on it.
TDD is being done in weird ways by lots of people from what I've seen. I always understood the book's advice to never write code without a test as aspirational productivity advice, not a hard rule.
My first job predated (at least our knowledge of) TDD and unit test frameworks. We would write little programs that would include some of our code and exercise them a bit during development. Later when everything was working and integrated, we'd throw it away. I believe that used to be called scaffolding (before Rails gave that term a different meaning).
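In Python terms, a scaffold in that spirit might look something like the sketch below: a tiny throwaway driver that exercises the code under development by hand and gets deleted once the real integration works. `parse_price` is an invented example function, not something from the original job.

```python
# Throwaway scaffold: poke at the function under development by hand,
# then delete this file once everything is integrated and working.

def parse_price(text):
    """Turn a string like '$1,234.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

if __name__ == "__main__":
    # Eyeball a few samples rather than writing a formal test suite.
    for sample in ["$1,234.50", "$0.99", "$10"]:
        print(sample, "->", parse_price(sample))
```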
When I got into unit testing and some degree of TDD a while later, I kinda kept the same spirit. The unit tests help me build the thing without needing ten steps to test that it works. Sure, I keep the tests, but primarily as documentation on how the parts of the system that are covered should behave. And when it's significantly easier to test something manually than to write a unit test, I tend towards that.
In languages that have good REPLs, I tend to write fewer tests, cause they function as a universal test scaffold.
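A minimal sketch of what that looks like in practice, using an invented helper (`slugify` is illustrative, not from the comment above): the REPL pokes come first, and only the ones worth keeping get promoted to asserts.

```python
# Hypothetical helper being explored in a REPL session.
def slugify(title: str) -> str:
    """Lowercase, keep letters/digits/spaces, join words with dashes."""
    cleaned = "".join(c for c in title.lower() if c.isalnum() or c == " ")
    return "-".join(cleaned.split())

# In the REPL you just poke at it interactively:
#   >>> slugify("Hello, World!")
#   'hello-world'
# If the code survives discovery, the pokes become permanent asserts.
```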
Trying to reach 100% test coverage and using unit tests for QA strikes me as strange. They're at most useful for quickly detecting regressions. In my experience, most of these monster test suites become a burden over time; a pragmatic test suite rarely does. There's a lot of potential in striking the right balance between unit tests, integration tests and manual testing, and a lot of time wasted if the balance is off.
With this mindset, I totally write tests for a prototype if it looks like it'll save me time. Not even close to 100% coverage though.
I use a mixture of both discovery and planning. The problem with the pure discovery approach is that I sometimes start to code when I am not completely attentive because of deadlines, and when working with a larger codebase it is difficult to start writing code instantly because you need some time to understand the context. I keep a text file open all the time where I note down stuff such as context, plans, ideas, edge cases, conflicts, todos, etc.; it serves as swap memory.
I think it's a dangerous philosophy for professional work. You know what happens to code that works? It ships.
If you code out a solution to the problem in order to discover the problem space, I think the idea here is that you then can go back and write a better solution that accounts for all of the stuff you discovered via refactoring and whatnot. But you're not going to do that refactoring. You're going to ship it. Because it works and you don't know of any problems with it. Are there scaling problems? Probably, you haven't run into any yet. Does your solution fit whatever requirements other interested parties might have? Who knows! We didn't do any designing or thinking about the problem yet.
This depends on whether your organization benefits from getting things done, or whether it's better to do nothing than 90% of the solution. Lots of organizations build elaborate work-prevention infrastructure to make sure that nothing less than 100%, or even 110%, gets done.
> Does your solution fit whatever requirements other interested parties might have? Who knows! We didn't do any designing or thinking about the problem yet
Often people don't know what they want until you show them something, at which point they start telling you what's wrong with it. Design by "strawman".
Many heavily lauded startup businesses were built on code that barely worked and got backfilled later.
Any person who would not go back and do that necessary redesign is unfit to call themselves a craftsperson.
Any organization that would not demand you do that necessary redesign is an organization unfit for producing critical systems.
Any organization that would go even further and prevent/de-prioritize you from doing that necessary redesign is unfit to call whatever it is that they do “engineering”.
Why would you want to work at such a dystopian hellscape if you have the choice?
Note that this is only about shipping the known incomplete design to customers because it “works”.
And to get ahead of it, yes, startups selling systems held together by spit and twine are perfect examples of “unfit for critical systems”; you should not bet critical things on them until they mature.
Discovery code doesn't have to be bad code. My rule of abstractions is that I defer them until they are needed, even if I know that they will be needed in the next sprint or whatever. I'll do a much better job making a precise abstraction when faced with an existing solution and a new problem.
Even if discovery code were bad code, I'll take "bad" code 8 out of 10 times if the competition is abstractions for abstractions' sake (which "good" code often is). Bad code is often simple and direct, and therefore simple to comprehend and fix. Fancy code is a game of Jenga.
Also, requirements often change by the next sprint, so my well-laid plans would be moot either way.
One of the things a lot of people are NOT used to: always writing good code, no matter what you are writing it for.
A while back I was writing small snippets to fix a production issue. A colleague looked over my shoulder and said, you don't have to do all that, just get it done for now.
He was quite surprised to learn that this is how I always write code. It feels like people have different coding styles for different situations. They tend to have a mode where they are doing it for production, and they are quite slow at it.
> You know what happens to code that works? It ships.
That's a process problem. Even professional writers have "drafts"; you shouldn't be shipping first-draft code regardless. I guess maybe you are the equivalent of a 2-cent rag, and then the product sucks, but we won't get into all the ways you can suck as a software org...
Programming is neither writing an article nor designing an engine; it's somewhere in the middle. You can apply "discovery" or "drafting" to the professional process as much as you can apply engineering design paradigms. The formal act of writing (unit) tests provides this opportunity, I find, to take the "discovery" implementations, harness them with specific "design" requirements, and produce a polished product.
TDD has this backwards: design, then develop, as if that would somehow produce fantastic designs (it doesn't, just a lot of test code). BDD (Behavior Driven Development) is better here: you aren't driven by your tests as much as by your behaviors. You may discover them, but once you do, you test that they continue to work correctly.
This is basically waterfall and agile in a nutshell.
The extreme end of the other side of the spectrum is exhaustive research and documentation: a rigid, drawn-out development process that doesn't readily allow for any sort of iteration.
The only real question is what the difference is between a working prototype that gets thrown away and an actual MVP. You need buy-in from the business up front; otherwise you won't be given the option to refactor or redo parts, you'll be told it is good enough and there are features that need shipping.
If this were the prominent mentality, some of the greatest pieces of software there are would never have existed because they would run into too many brick walls to even try.
Do you never go back and redesign things later? Can we not ship unless we have all ends neatly tied up?
So better to do all discovery in a system/tool that has no risk of being shipped like excel, whiteboard, design document, conversation, human brain, etc?
Am I going to ship it and not address scaling, refactoring, etc? What about discovery outside a code environment forces our hand on this?
Feels a bit to me like the real issue is not where discovery takes place but that crucial discovery steps are skipped for whatever reason.
I'm stuck somewhere in or around 1 & 2 below.
1) Those other systems for discovery suck at capturing the behavior of complex systems compared to working code in motion. Happy to discuss, but I'd love it if this were an obvious conclusion, somewhere in the neighborhood of the "map is not the territory" trope.
2) Skipping important steps in discovery sucks no matter where it occurs. I've seen this institutionalized/practiced at both ends of the philosophical spectrum. I wasn't being sarcastic or facetious (much) with the questions above.
I'd love it if someone could help me move somewhere past the above. I think I might sleep better at night.
This is mostly my experience, except that I DO know the problems.
I don’t get to define the deadlines, that’s up to someone in a department entirely unrelated to engineering. Therefore I only have time to ship fragile, barely-tested code which is built on fragile untested legacy code.
>You know what happens to code that works? It ships.
That depends on how you talk about where you are in the process. If you tell people it is done then why wouldn't it ship?
>If you code out a solution to the problem in order to discover the problem space,
At this point I would say I have figured out the problem but am still coding the solution, because that is actually where I am in resolving the issue: the solution is known, but the code to resolve it isn't complete.
One way to protect against this is to explore in a language that your coworkers don't care about. Once the concept is proved, you can rewrite in whatever they prefer.
> I think it's a dangerous philosophy for professional work. You know what happens to code that works? It ships.
It's worse than that: this happens to code that appears to work.
But of course people who work on "prototypes" don't code for robustness; they code for "delivery time". Then of course when you ship the prototype, the result is ugly.
I am an avid discovery coder and was actually day dreaming an outline for a similar article on my way home on the bus today. I think this is an extremely important concept at all levels of engineering and something we all need to adopt at one point or another in our careers / practice.
I think it follows a few topics, like "the art of the POC/spike" or just exploratory coding. These things give us a tangible, hands-on approach to understanding the codebase, and I think lead to better empathy for and understanding of a software system, and fewer rash criticisms of unfamiliar projects.
This is particularly relevant to me right now as I am discovery coding a fairly large project at my company and working with product to lay out design and project planning. What's difficult to express from my current standing is how the early stages of these types of projects are more milestone/broad based rather than isolated small key pieces. Sure, I can spend a week delivering design, architecture, epic, and outline docs for all the known and unknown features of the project (and I am). But at the same time I need to discover and test out base case/happy path solutions to the core business problem to more accurately understand the scope of the project.
I think it's something I particularly love about being a TL/IC at my company. I have the flexibility and trust to "figure it out" and the working arrangement to provide adequate professional documentation at the appropriate time. I am fortunate to have that buy-in from leadership and certainly recognize it as a unique situation.
All that being said:
1. Learn how to effectively isolate and run arbitrary parts of your system for YOUR understanding and learning
2. Make it work, make it right, make it fast.
3. Learn to summarize and document your findings in a suitable fashion for your situation
4. Encourage this throughout your team. Useful in all aspects from bug triage to greenfield work
One trick I find helpful is to start by coding a subset of the problem with the goal of understanding the structure better. A brute-force solution, a simulation, a visualization of data, etc. And then use the discoveries of that process to do the real planning.
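As a concrete sketch of that trick, here is what a brute-force first pass might look like for a subset-sum style problem (the problem choice is illustrative, not from the comment): exponential but trivially correct, which is exactly what you want while mapping out the structure, and useful later as an oracle to validate the real solution against.

```python
from itertools import combinations

def subset_sum_brute(nums, target):
    """Does any subset of nums sum to target?
    Exponential, but trivially correct -- good enough to probe the
    problem's structure, and to cross-check a faster version later."""
    return any(sum(combo) == target
               for r in range(len(nums) + 1)
               for combo in combinations(nums, r))
```

Once the brute force has exposed the edge cases (empty subsets, duplicates, negative numbers), the "real planning" for a dynamic-programming version is far better informed.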
I call this exploratory programming, though my approach aligns more with the article I posted here than with the Wikipedia definition.
I primarily use this method as a step preceding the actual production-quality implementation. It’s not like a prototype—I don’t throw everything away when I’m done. Instead, I extract the valuable parts: the learned concepts, the finished algorithms, and the relevant functions or classes. Unit tests are often written as part of setting up the problem, so I lift those out as well.
I’ve greatly enjoyed this approach, particularly in JavaScript&|TypeScript. Typically, I solve the difficult parts in a live environment and extract the solutions when I find them. I used to use my own "live environment" (hedon.js), but I eventually reversed the approach and built an environment around the built-in Node.js REPL (@dusted/debugrepl). I include this, at least during debugging and development builds, allowing me to live-code within a running system while having access to most, if not all, of the already-implemented parts of the program.
This approach lets me iterate at the function-call or expression level rather than following the traditional cycle of modifying code, restarting the program, reestablishing state, and triggering the desired call, something that annoys me to no end for all the obvious reasons.
A large fraction of code I write at work is either network protocol reverse engineering or interfacing with physical devices. Peeling stuff open layer by layer is often the only way to approach a problem and if I had to document everything beforehand, I would end up writing the same program twenty times.
I've never been a [traditional] artist, but I reckon that those working in the arts, and even areas of the programming world where experimentation is more fundamental (indie game development, perhaps?), would intuit the importance of discovery coding.
Even when you're writing code for hairy business problems with huge numbers of constraints and edge cases, it's entirely possible to support programmers that prefer discovery coding. The key is fast iteration loops. The ability to run the entire application, and all of its dependencies, locally on your own machine. In my opinion, that's the biggest line in the sand. Once your program has to be deployed to a testing environment in order to be tested, it becomes an order of magnitude harder to use a debugger, or intercept network traffic, or inspect profilers, or do test driven development. It's like sketching someone with a pencil and eraser, but there are 5-10 second delays between when you remove your pencil and when the line appears.
Unfortunately, it seems like many big tech companies, even that would seem to use very modern development tooling otherwise, still tend to make local development a second class citizen. And so, discovery coders are second class citizens as well.
Yea, TIL, I'm a discovery coder. Always found planning early in greenfield projects kind of pointless. Planning is almost step 3 or 4. I almost always prototype the most difficult/opaque parts, build operations around testing and revising (how do you know something is good enough?), and then plan out the rest.
Hard agree on local development. I always make apps run locally and include a readme that describes all the steps for someone else to run it locally as well.
Ideally that should be as simple as adding a local app settings file (described in the readme so people don't have to start reading the code to figure out what to put in it) for secrets and other local stuff (make sure the app isn't trying to send emails locally, etc.), and running docker compose up. If there are significantly more steps than that, there had better be good reasons for them.
I believe this is actually where gen-AI tools like Claude.AI come into their own. For example, in the past few weeks we needed to plan a complex integration project with both frontend and backend integrations, dependencies on data provided in the backend and frontend, a need to send data back and forth between the 3rd party and our backend, etc., and in total probably a half-dozen viable alternative ways of doing it.
Using Claude plus detailed prompting with a lot of contextual business knowledge, it was possible to build a realistic working prototype of each possible approach as separate Git branches and easily demo them within about two days. Doing this also captured multiple hidden constraints aka "gotchas" in the 3rd party APIs.
Building each of these prototypes in the working Java codebase would have been a massive, time-consuming and pointless activity when a decision still needed to be made on which approach to go with. But getting Claude AI to whip up a simplified replica of our business systems using realistic interfaces and then integrate the 3rd party was super-easy. Generating alternative variants was as simple as running a script to consolidate the source files and getting Claude to generate the new variant, almost without any coding needed.
And because this prototype was built in a few HTML and JS files and run using Node.js, there is literally zero possibility of it becoming part of the production codebase.
I've seen the opposite happen maaaaany times, but I have never had what you describe happen that I can remember. Coding, if done top down, will find the real problems really fast. Discussions don't have this property of touching reality.
Discussing a problem is like theology.
Coding it is like science.
One involves thinking real hard, the other involves hard reality.
In my experience, this aphorism applies equally to any form of coding, and probably to nearly any complex human activity.
If you love writing outlines and plans, you can just as well waste time on that as the discovery coder does in their pathfinding. Not to mention the amount of time you can waste on refactoring and reorganizing.
I find that I always learn something valuable by diving in and trying ideas out concretely. High-flying plans can also cause a lot of wasted coding on things that won't work out.
Reminds me of the alleged programming approach of Dr. Joe Armstrong of Erlang fame (RIP): write a program, then rewrite it, then rewrite it, and so on until it's good enough.
That's also how I tend to program, though usually as an accidental consequence of my ADD brain getting distracted, then being entirely dissatisfied with my code (or worse: I was too clever with it and it's indecipherable) when I come back to it, prompting yet another rewrite.
Never thought of it this way, but it makes sense. My default response to probing/planning type questions from business is "uhhh no clue, I have to dive into the code first and find out" precisely because of this.
My workflow these days is to start by writing a Python notebook that solves the problem. There's no faster way to iterate on writing something. Once it works I usually have o1 Pro write tests, clean it up, and then convert it to whatever language I actually need it written in.
The actual term for a "discovery writer" is a "pantser" - i.e., you're writing by the seat of your pants - and I think that's a reasonable term to adopt here too.
Confession: I'm a pantser in writing both code and prose. In both cases, coming back and writing a spec (an eng spec in the case of code, a synopsis in the case of prose) is a reasonable thing to do. Structure is good, but the point is that it shouldn't get in the way of actually getting started and making some progress.
Nah. I'd almost argue that discovery coding is what helps define the architecture. I've seen way too many cases of over complexity designing a system for scale that will never be hit which results in an architecture that is too rigid to add new features to. If you do discovery coding you would realize the real bottlenecks and functionality that's required and can build an architecture that addresses those concerns instead of just designing for design's sake.
> I don't think many tools today are designed with discovery coding in mind. Things like live programming in a running system (see how (most) Clojurists or SmallTalkers work) are incredibly valuable for a discovery programmer. Systems that let you visualize on the fly, instrument parts of the system, etc., make it much easier and faster to discover things. This is the sense of dynamic (unrelated to typing) that I feel we often lose sight of.
Python's REPL achieves this nicely, but C# in Visual Studio has it too: the Immediate Window lets you type out code while you're debugging at a breakpoint. I almost always copy an if statement's expression, paste it there, and get back either "true" or "false", which usually tells me insanely quickly whether my assumption is spot on.
I like that Blazor has Hot Code Reloading, though it is definitely finicky.
Feels unnecessary to call it _____ coding and try to bless it with the air of methodology when it's just hacking or jamming, just as you would on a guitar or a drawing or any other manual creative activity. You're just moving forward with whatever level of intuition your experiences and learnings afford you. :shrugs:
Yeah, this is not a new concept, and I remember seeing one of his videos that explains it really well. Here's the relevant video from 9 years ago for those uninitiated:
I feel I finally can put a name to my coding style! Incidentally, I also like to pretend classes and functions exist even if they don’t. Of course my discovery code won’t work, but I can go very far and discover a lot of useful information this way.
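That "pretend it exists" style is sometimes called wishful-thinking programming, and a minimal sketch of it might look like the following (all names here are invented for illustration): write the top-level flow first, calling functions that don't exist yet, then backfill just enough to make it run.

```python
# "Wishful thinking" sketch: the flow is written first, against
# functions that didn't exist yet at the time of writing.

def summarize(raw_lines):
    records = parse(raw_lines)   # didn't exist when this line was written
    return total(records)        # neither did this

# Backfilled afterwards; their shape was discovered by writing summarize().
def parse(raw_lines):
    """Keep non-blank lines and turn them into numbers."""
    return [float(line) for line in raw_lines if line.strip()]

def total(records):
    return sum(records)
```

The payoff is that the call sites, not premature design, dictate what signatures the helpers need.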
In Pharo, discovery coding is really encouraged. I personally like it for writing scrapers and/or web tests (I use Selenium in Pharo). I once gave an improvised talk about it [1]. Nowadays, I'm back to Python and JavaScript due to their ecosystems, but for discovery coding and Selenium-based testing/scraping, I still think the interactive experience in Pharo is unmatched.
Interesting piece. As a coder and writer (hardcore outliner here), I’ve thought about this too.
I wonder about writing user stories for fiction. If software is something people use to realize an outcome, fiction is also something readers consume to realize an outcome. What might a “reader story” look like for some of our favorite novels? What kind of impression or change does a writer seek to produce in a reader’s mind? Such documentation could be valuable, similar to requirements documentation.
Aside to the author: King’s first name is spelled with a “ph.” Sorry, long time fan here :)
I call this exploratory prototyping and I think it's an absolute super-power. Sitting down and noodling with a prototype for 30-90 minutes is an incredibly powerful way to build a deeper intuition for a project. You can throw that all away and still be in a much stronger position to design the approach (and have useful conversations about it).
Guilty, but as with anything there is a right time to use it and a time to most definitely avoid it. If you are working with others in particular there is very little room to wing it.
Experienced devs can do so in a first pass to flesh out an idea before letting the team get involved, but thereafter the design immediately becomes rigid. Inexperienced devs can wing it to learn in an exploratory way, but their work is unlikely to be re-purposable.
If you keep the first thing you happen to write while exploring, I don't think it qualifies as exploration/discovery. Then it's just regular code monkeying.
Am an "outliner", currently working together with a "discovery coder" on a project. We are half a year in and have no common working build, just my "outline" and a bunch of non-integrated throwaway discovery bits. I do believe that eventually they will produce the solution, but it is very hard to reason about the timeline in such a setup.
It took me 10+ years to realize I'm the opposite of a discovery coder. I'm much more efficient thinking through the problem with pen and paper before even touching the keyboard.
I think if you also try to constrain yourself to only building orthogonal components it leads to a needs-based Lego set. I think it is difficult to see the underlying symmetry in a problem space if you design top-down.
> Discovery coding is a practice of understanding a problem by writing code first, rather than attempting to do some design process or thinking beforehand
When you write larger systems, you'd better start making up your mind about the domain: understand it, come up with concepts and names that make sense, instead of starting to code right away. Yeah, there are problem domains where an exploratory approach makes sense, but not when you create a product or a complex system.
Often startups don't quite know what the product is until after they've been through several rounds of showing it to customers. Hence all the product-market-fit and "pivot" talk. There's no point in building a finely crafted wrong product. And people absolutely will not tell you what product they want built if it's genuinely innovative.
I haven't even read the article but the way I use this is to familiarize myself with new stuff. You can't plan a system when you have no idea how it's going to work. That's why it's called discovery, you just write code and see how things work, then you refactor or just throw out everything and start from scratch once you feel ready to actually build the thing.
If you just write whatever you happen to come up with and keep it, that's just regular code monkeying.
It's also not a technique you use for an entire system, hopefully you're able to plan most things. But maybe a part of it integrates with a complex api you haven't used before, then it's nice to just take some time to figure out how that api really works at a level that's hard to achieve just by skimming the docs.
In my experience, some projects are very suited to discovery coding.
For my Machine Learning projects, I usually run a bunch of experiments either using pytest or directly on jupyter. I tend to plot out stuff and try out different types of feature engineering. Once I have nailed the approach, I then port a subset of this to a different jupyter notebook or python script, which is cleaner and more readable. This is because ML experiments are compute bottlenecked. So I want to ensure I spend enough time to pick the best features and model.
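A toy version of that experiment loop, stripped of any real ML library (the "score" here is a stand-in covariance, not a proper cross-validated metric): try a few feature transforms on a small dataset and keep whichever tracks the labels best.

```python
# Toy feature-engineering experiment loop; score() is a stand-in metric.

def score(features, labels):
    """Covariance between one feature column and the labels."""
    mf = sum(features) / len(features)
    ml = sum(labels) / len(labels)
    return sum((f - mf) * (l - ml) for f, l in zip(features, labels))

transforms = {
    "raw":     lambda x: x,
    "squared": lambda x: x * x,
}

xs = [1, 2, 3, 4]
ys = [1, 4, 9, 16]  # quadratic relationship, so "squared" should win

best = max(transforms,
           key=lambda name: score([transforms[name](x) for x in xs], ys))
```

In a real project this loop lives in the scratch notebook; only the winning transform gets ported to the clean script.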
At work (which is not ML-related), I tend to do much less discovery coding because most of the unknowns are business related - does this API (owned by another team) scale, what is the correct place for this business logic etc. And, doing a viable Proof of Concept is time consuming, so I'd rather spend the time sweating out the nitty-gritties with product. The Discovery here must happen in the discussion with Product or other stakeholders because that is the expensive part. This is also why Product changing the Spec once the project is underway is infuriating. Sometimes a good chunk of the discovery is nullified.
I like it. This reminds me of the "tracer bullet" concept, as described in The Pragmatic Programmer by Andrew Hunt and David Thomas. As physical tracer bullets illuminate the path to a target by glowing as they travel, the aptly named software development approach aims at creating a functional, end-to-end version of a system early in the project lifecycle. This strategy balances discovery and outlining by deliberately planning development of the minimal functional path that will help discover and overcome criticalities. No big castles of imagination, nor wasteful roaming, but targeted, hands-on discovery. Key points here are: end-to-end functionality, real code (not throwaway, unlike a mockup or sketch), feedback-driven iterative improvements, risk reduction. Benefits: clarity, alignment, momentum, adaptability.
I always appreciate someone coining a useful expression, but I gotta say, sometimes these coiners are trying a bit too hard, no? I think I'll stick with "exploration-based", "exploratory" or "explorative" programming, which are commonly used and all sound less awkward to me than "discovery coding". But hey, if that's what people flock to I'll get with the times! I've always used a combination of both approaches and the descriptions made here are great.
I agree, "exploration" style ... learning. Exploration is what we are in control of. Discovery is the natural and expected result, but isn't in our control.
Exploration style "learning" applies to anything.
But the phrase "exploration oriented development" does make sense. Given the useful and growing list of "X oriented development" paradigms that have been identified.
Exploration is especially good for:
1. Learning how to create things for fun.
2. Building solutions in unfamiliar areas.
3. For the most challenging greenfield problems and solutions.
In all cases, jump in, try things. Iterate as fast as possible to identify the most significant problem, define it as clearly as possible, find the smallest possible version of it, and try every angle to solve that. Then reorient and repeat.
Exploration is the right word, because you are letting the terrain guide you to unexpected problem perspectives, solutions, and means. From a learning goal standpoint, it is extremely efficient, and the specifics and wisdom discovered are often original.
I have spent my career doing greenfield work, chosen by me, which must be rare. Every day I push as hard as I can on the smallest problem surface I can find. The downside is that rate of progress is unpredictable, chaotic with very high deviation. And sometimes you expend a lot of time working your way into a real dead end. Another is the amount of code discarded can dwarf the final code by orders of magnitude.
“A programming language is for thinking of programs, not for expressing programs you’ve already thought of. It should be a pencil, not a pen.” - from PG’s “Hackers & Painters”
> For some reason, we have no such distinction in programming, so I am here to introduce it.
What? Of course there is. Exploratory programming, just writing code without big or even little design, is (or was?) incredibly common. It doesn't scale beyond one or two programmers so we don't talk about it very much, but popular products like Minecraft originated this way, and almost certainly lots of one-off useful tools fall into this category as well.
Even in big products, proof of concepts or prototypes can and are often done this way.
Common Lisp, with its excellent REPL, is great for this type of exploratory work. The ability to build up a function from the inside out, and to build systems and classes from functions, again from the inside out, trivially creating mocks and ad-hoc tests as you go, is fantastic DX.
Someone said on HN a while back that this sort of approach is the most sensible, because any design doc is just an incomplete abstraction until you get in the weeds to see where all the tricky parts are. Programming is literally writing plans for a computer to do a thing, so writing a document about everything you need to write the plan is an exercise in frustration. And if you DO spend a lot of time making a perfect document, you've put in far more work than simply hacking around a bit to explore the problem space. Because only by fully exploring a problem can you confidently estimate how long it will take to write the real code. Oftentimes after the first draft I just need to clean things up a bit, or I only half-complete a bad solution before improving it. Yes, there are times you need to sit and have a think first, but less often than you might imagine.
Of course, at a certain scale and team size a tedious document IS the fastest way to do things... but god help you if you work on a shared code base like that.
I've always thought the rigid TDD approach and similar anti-coding styles really lend themselves to people who would rather not be programming. Or who at least have a touch of OCD and can't stand not having a unit test for every line of code. Because it really is a lot more work, both up front and in maintenance, to live that way.
Cyber-paper is cheap, so don't be afraid to write some extra lines on it.
TDD is being done in weird ways by lots of people, from what I've seen. I always understood the book's advice to never write code without a test as both aspirational and productivity advice, not a hard rule.
My first job predated (at least our knowledge of) TDD and unit test frameworks. We would write little programs that would include some of our code and exercise them a bit during development. Later when everything was working and integrated, we'd throw it away. I believe that used to be called scaffolding (before Rails gave that term a different meaning).
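A throwaway scaffold in that spirit might look like the following Python sketch (the function and inputs are invented for illustration — the point is a little driver that includes the code under development, exercises it, and gets deleted once everything is integrated):

```python
# scaffold.py -- a throwaway driver in the spirit described above.
# Not a test framework: run it, eyeball the output, delete it later.

def parse_record(line):
    """Code under development: turn 'name,age' into a (str, int) pair."""
    name, age = line.split(",")
    return name.strip(), int(age)

if __name__ == "__main__":
    # Poke the function with a few inputs and print what comes back.
    for sample in ["Ada, 36", "Grace,45", "  Edsger , 72"]:
        print(repr(sample), "->", parse_record(sample))
```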
When I got into unit testing and some degree of TDD a while later, I kinda kept the same spirit. The unit tests help me build the thing without needing ten steps to test that it works. Sure, I keep the tests, but primarily as documentation on how the parts of the system that are covered should behave. And when it's significantly easier to test something manually than to write a unit test, I tend towards that.
In languages that have good REPLs, I tend to write fewer tests, cause they function as a universal test scaffold.
Trying to reach 100% test coverage and using unit tests for QA strikes me as strange. They're at most useful to quickly detect regressions. But most of these monster test suites become a burden over time, in my experience. A pragmatic test suite rarely does. There's a lot of potential in striking the right balance between unit tests, integration tests, and manual testing — and a lot of time wasted if the balance is off.
With this mindset, I totally write tests for a prototype if it looks like it'll save me time. Not even close to 100% coverage, though.
> I believe that used to be called scaffolding
We always called them "Unit Tests." Same for what we now call Test Harnesses.
Sometime in the last decade or so, "Unit Test" has become a lot more formalized, to mean the code-only structural testing we see these days.
I tend to like using Test Harnesses[0], which are similar to what you described.
Unit tests are great, but I have found, in my own work, that 100% code coverage is often no guarantee of Quality.
I have yet to find a real "monkey testing" alternative. I suspect that AI may give us that, finally.
Oh, I also do "Discovery Coding," but I call it "Evolutionary Design."[1] I think others call it that, as well.
[0] https://littlegreenviper.com/various/testing-harness-vs-unit...
[1] https://littlegreenviper.com/various/evolutionary-design-spe...
I use a mixture of both discovery and planning. The problem with the pure discovery approach is that I sometimes start to code when I'm not fully attentive because of deadlines, and when working in a larger codebase it's difficult to start writing code instantly because you need some time to understand the context. I always have a text file open where I note down stuff such as context, plans, ideas, edge cases, conflicts, todos, etc.; it serves as swap memory.
I think it's a dangerous philosophy for professional work. You know what happens to code that works? It ships.
If you code out a solution to the problem in order to discover the problem space, I think the idea here is that you then can go back and write a better solution that accounts for all of the stuff you discovered via refactoring and whatnot. But you're not going to do that refactoring. You're going to ship it. Because it works and you don't know of any problems with it. Are there scaling problems? Probably, you haven't run into any yet. Does your solution fit whatever requirements other interested parties might have? Who knows! We didn't do any designing or thinking about the problem yet.
This depends on whether your organization benefits from getting things done, or whether it's better to do nothing than 90% of the solution. Lots of organizations build elaborate work-prevention infrastructure to make sure that nothing less than 100%, or even 110%, gets done.
> Does your solution fit whatever requirements other interested parties might have? Who knows! We didn't do any designing or thinking about the problem yet
Often people don't know what they want until you show them something, at which point they start telling you what's wrong with it. Design by "strawman".
Many heavily lauded startup businesses were built on code that barely worked and got backfilled later.
Any person who would not go back and do that necessary redesign is unfit to call themselves a craftsperson.
Any organization that would not demand you do that necessary redesign is an organization unfit for producing critical systems.
Any organization that would go even further and prevent/de-prioritize you from doing that necessary redesign is unfit to call whatever it is that they do “engineering”.
Why would you want to work at such a dystopian hellscape if you have the choice?
Note that this is only about shipping the known incomplete design to customers because it “works”.
And to get ahead of it, yes, startups selling systems held together by spit and twine are perfect examples of “unfit for critical systems”; you should not bet critical things on them until they mature.
Discovery code doesn't have to be bad code. My rule of abstractions is that I defer them until they are needed, even if I know that they will be needed in the next sprint or whatever. I'll do a much better job making a precise abstraction when faced with an existing solution and a new problem.
Even if discovery code were bad code; I'll take "bad" code 8 out 10 times if the competition is abstractions for abstractions' sake (which "good" code often is). Bad code is often simple and direct, and therefore simple to comprehend and fix. Fancy code is a game of jenga.
Also, requirements often change by the next sprint, so my well-laid plans would be moot either way.
>>Discovery code doesn't have to be bad code.
One of things that a lot of people are NOT used to- Always writing good code, no matter what you are writing it for.
A while back I was writing small snippets to fix a production issue. A colleague looked over my shoulder and said, you don't have to do all that, just get it done for now.
He was quite surprised when he learned that is how I always wrote code. It feels like people have different coding styles for different situations. They do tend to have a mode where they are doing it for production, and they are quite slow at it.
>>You know what happens to code that works? It ships.
there is nothing wrong with that, it is up to the developer and a team to enforce and gatekeep quality
or up to the business user to accept working code
> You know what happens to code that works? It ships.
That's a process problem. Even professional writers have "drafts"; you shouldn't be shipping first-draft code regardless. I guess maybe you are the equivalent of a 2-cent rag, and then the product sucks, but we won't get into all the ways you can suck as a software org...
Programming is neither writing an article nor designing an engine; it's somewhere in the middle. You can apply "discovery" or "drafting" to the professional process as much as you can apply engineering design paradigms. The formal act of writing (unit) tests provides this opportunity, I find: take the "discovery" implementations, harness them in specific "design" requirements, and produce a polished product.
TDD had this backwards, design then develop, as if that would somehow produce fantastic designs (it doesn't, just a lot of test code). BDD (Behavior Driven Development) is better here, you aren't driven by your tests as much as your behaviors. You may discover them, but once you do you test that they continue to work correctly.
It’s very simple.
As soon as the code works, tell absolutely no one.
Then rewrite it….
This is basically waterfall and agile in a nutshell.
The extreme end of the other side of the spectrum is research and documentation to the extreme, coming up with a rigid, long development process that doesn't readily allow for any sort of iteration.
The only real question is what is the difference between a working prototype that gets thrown away and an actual MVP. You need buy in from the business up front, otherwise you won't be given the option to refactor or redo parts, you'll be told it is good enough and there are features that need shipping.
If this were the prominent mentality, some of the greatest pieces of software there are would never have existed because they would run into too many brick walls to even try.
Do you never go back and redesign things later? Can we not ship unless we have all ends neatly tied up?
So better to do all discovery in a system/tool that has no risk of being shipped like excel, whiteboard, design document, conversation, human brain, etc?
Am I going to ship it and not address scaling, refactoring, etc? What about discovery outside a code environment forces our hand on this?
Feels a bit to me like the real issue is not where discovery takes place but that crucial discovery steps are skipped for whatever reason.
I'm stuck somewhere in or around 1 & 2 below.
1) those other systems for discovery suck at capturing behaviors of complex systems compared to working code in motion. Happy to discuss, but I'd love if this were an obvious conclusion somewhere in the neighborhood of the "the map is not the territory" trope
2) skipping important steps in discovery sucks no matter where it occurs. I've seen this institutionalized/practiced at both ends of the philosophical spectrum. I wasn't being sarcastic or facetious (much) about the questions above.
I'd love it if someone could help me move somewhere past the above. I think I might sleep better at night.
I find this sentiment weird.
You don't want to pick low hanging fruit first?
You don't want to deliver value to the org fast?
You don't want to get the 80% solution done so that you can work on something else that needs more desperate attention?
This is mostly my experience, except that I DO know the problems.
I don’t get to define the deadlines, that’s up to someone in a department entirely unrelated to engineering. Therefore I only have time to ship fragile, barely-tested code which is built on fragile untested legacy code.
But not taking this philosophy is dangerous if you're trying to do something better than professionals.
>You know what happens to code that works? It ships.
That depends on how you talk about where you are in the process. If you tell people it is done then why wouldn't it ship?
>If you code out a solution to the problem in order to discover the problem space,
At this point I would say I have figured out the problem but am still coding the solution. Because that is actually where I am in resolving the issue, the solution is known but the code to resolve it isn't complete.
One way to protect against this is to explore in a language that your coworkers don't care about. Once the concept is proved, you can rewrite in whatever they prefer.
> I think it's a dangerous philosophy for professional work. You know what happens to code that works? It ships.
It's worse than that: this happens to code that merely appears to work. But of course people who work on "prototypes" don't code for robustness, they code for "delivery time", and then of course when you ship the prototype, the result is ugly.
> You know what happens to code that works? It ships.
The axiom I share with other engineers (real engineers) is "the first working solution is the final solution."
No, it will be a "somehow working solution for case A" and a "somehow working solution for case B", but not together.
I am an avid discovery coder and was actually day dreaming an outline for a similar article on my way home on the bus today. I think this is an extremely important concept at all levels of engineering and something we all need to adopt at one point or another in our careers / practice.
I think it follows a few topics, "the art of the POC/Spike" or just exploratory coding. These things give us a tangible hands on approach for understanding the codebase, and I think lend to better empathy and understanding of a software system and less rash criticisms of projects that may be unfamiliar.
This is particularly relevant to me right now as I am discovery coding a fairly large project at my company and working with product to lay out design and project planning. What's difficult to express from my current standing is how the early stages of these types of projects are more milestone/broad-based rather than isolated small key pieces. Sure, I can spend a week delivering design, architecture, epic, and outline docs for all the known and unknown features of the project (and I am). But at the same time I need to discover and test out base-case/happy-path solutions to the core business problem to more accurately understand the scope of the project.
I think its something I particularly love about being a TL / IC at my company. I have the flexibility and trust to "figure it out" and the working arrangement to provide adequate professional documentation at the appropriate time. I am fortunate to have that buy in from leadership and certainly recognize it as a unique situation.
All that being said:
1. Learn how to effectively isolate and run arbitrary parts of your system for YOUR understanding and learning
2. Make it work, make it right, make it fast.
3. Learn to summarize and document your findings in a suitable fashion for your situation
4. Encourage this throughout your team. Useful in all aspects from bug triage to greenfield work
One trick I find helpful is to start by coding a subset of the problem with the goal of understanding the structure better. A brute-force solution, a simulation, a visualization of data, etc. And then use the discoveries of that process to do the real planning.
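That trick can be sketched concretely. Here the toy problem (maximum contiguous-subarray sum) and all names are illustrative: a brute-force version is written first to learn the problem's shape and edge cases, then it doubles as an oracle to cross-check the "real" solution:

```python
import random

def max_subarray_brute(xs):
    """Discovery step: O(n^2) brute force over every slice.
    Slow, but obviously correct -- written first to explore the problem."""
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def max_subarray_fast(xs):
    """The 'real' O(n) solution (Kadane's algorithm), designed after the
    brute force exposed the edge cases (all-negative input, single element)."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# The discovery code isn't wasted: it cross-checks the clever version.
for _ in range(200):
    xs = [random.randint(-9, 9) for _ in range(random.randint(1, 8))]
    assert max_subarray_brute(xs) == max_subarray_fast(xs)
```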
I've heard this sort of activity called "pathfinding".
It's a "spike" in Agile parlance, if you want to sell this approach to agile people
I usually "play around with" with a problem or data, but "pathfinding" sounds much better.
I call this exploratory programming, though my approach aligns more with the article I posted here, than with the Wikipedia definition.
I primarily use this method as a step preceding the actual production-quality implementation. It’s not like a prototype—I don’t throw everything away when I’m done. Instead, I extract the valuable parts: the learned concepts, the finished algorithms, and the relevant functions or classes. Unit tests are often written as part of setting up the problem, so I lift those out as well.
I’ve greatly enjoyed this approach, particularly in JavaScript and/or TypeScript. Typically, I solve the difficult parts in a live environment and extract the solutions when I find them. I used to use my own "live environment" (hedon.js), but I eventually reversed the approach and built an environment around the built-in Node.js REPL (@dusted/debugrepl). I include this, at least during debugging and development builds, allowing me to live-code within a running system while having access to most, if not all, of the already-implemented parts of the program.
This approach lets me iterate at the function-call or expression level rather than following the traditional cycle of modifying code, restarting the program, reestablishing state, and triggering the desired call, something that annoys me to no end for all the obvious reasons.
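The setup described above is Node.js-specific, but as a rough analogy, Python's stdlib `code` module supports the same pattern: opening a REPL inside a running process so you can iterate at the expression level against live state. Everything below is a made-up example, not the commenter's actual tool:

```python
import code

# A long-running "system" with state already established.
cache = {"user:1": {"name": "Ada"}}

def lookup(key):
    """Part of the running program we want to poke at interactively."""
    return cache.get(key)

# Uncomment to drop into a REPL *inside* the live process. At the prompt
# you can call lookup("user:1"), mutate cache, redefine helpers, etc.,
# without restarting the program or re-establishing state:
#
# code.interact(banner="live repl -- Ctrl-D to resume", local=dict(globals()))

print(lookup("user:1"))
```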
A large fraction of code I write at work is either network protocol reverse engineering or interfacing with physical devices. Peeling stuff open layer by layer is often the only way to approach a problem and if I had to document everything beforehand, I would end up writing the same program twenty times.
I really appreciate this essay.
I've never been a [traditional] artist, but I reckon that those working in the arts, and even areas of the programming world where experimentation is more fundamental (indie game development, perhaps?), would intuit the importance of discovery coding.
Even when you're writing code for hairy business problems with huge numbers of constraints and edge cases, it's entirely possible to support programmers that prefer discovery coding. The key is fast iteration loops. The ability to run the entire application, and all of its dependencies, locally on your own machine. In my opinion, that's the biggest line in the sand. Once your program has to be deployed to a testing environment in order to be tested, it becomes an order of magnitude harder to use a debugger, or intercept network traffic, or inspect profilers, or do test driven development. It's like sketching someone with a pencil and eraser, but there are 5-10 second delays between when you remove your pencil and when the line appears.
Unfortunately, it seems like many big tech companies, even that would seem to use very modern development tooling otherwise, still tend to make local development a second class citizen. And so, discovery coders are second class citizens as well.
Yea, TIL, I'm a discovery coder. Always found planning early in greenfield projects kinda pointless. Planning is almost step 3 or 4. I almost always prototype the most difficult/opaque parts, build operations around testing and revising (how do you know something is good enough?), and then plan out the rest.
Hard agree on local development. I always make apps run locally and include a readme that describes all the steps for someone else to run it locally as well.
Ideally that should be as simple as adding a local app settings file (described in readme so people don't have to start reading the code to figure out what to put in it) for secrets and other local stuff (make sure the app isn't trying to send emails locally etc), and running Docker compose up. If there are significantly more steps than that there better be good reasons for them.
I believe this is actually where Gen AI tools like Claude.AI come into their own. For example, in the past few weeks we needed to plan a complex integration project with both frontend and backend integrations, dependencies on data provided in backend and frontend, need to send data backwards and forwards from the 3rd party and our backend, etc, and in total probably a half-dozen viable alternative ways of doing it.
Using Claude plus detailed prompting with a lot of contextual business knowledge, it was possible to build a realistic working prototype of each possible approach as separate Git branches and easily demo them within about two days. Doing this also captured multiple hidden constraints aka "gotchas" in the 3rd party APIs.
Building each of these prototypes in the working Java codebase would have been a massive, time-consuming and pointless activity when a decision still needed to be made on which approach to go with. But getting Claude AI to whip up a simplified replica of our business systems using realistic interfaces and then integrate the 3rd party was super-easy. Generating alternative variants was as simple as running a script to consolidate the source files and getting Claude to generate the new variant, almost without any coding needed.
And because this prototype was built in a few HTML and JS files and run using Node.js, there is literally zero possibility of it becoming part of the production codebase.
This reminds me of a tongue-in-cheek phrase we used to use in college.
"Hours of coding can save minutes of planning."
"Discovery Coding" sounds fun, but be careful with your time!
The opposite is also true.
I've seen the opposite happen maaaaany times, but I have never had what you describe happen that I can remember. Coding, if done top down, will find the real problems really fast. Discussions don't have this property of touching reality.
Discussing a problem is like theology.
Coding it is like science.
One involves thinking real hard, the other involves hard reality.
> careful with your time!
In my experience, this aphorism applies equally to any form of coding, and probably to nearly any complex human activity.
If you love writing outlines and plans, you can just as well waste time on that as the discovery coder does in their pathfinding. Not to mention the amount of time you can waste on refactoring and reorganizing.
Also, minutes of planning without understanding the topic can lead to months of coding.
Can't plan what you dont know. That's the point, you discover/explore what you need in order to make a proper plan.
I find that I always learn something valuable by diving in and trying ideas out concretely. High-flying plans can also cause a lot of wasted coding on things that won't work out.
Reminds me of the alleged programming approach of Dr. Joe Armstrong of Erlang fame (RIP): write a program, then rewrite it, then rewrite it, and so on until it's good enough.
That's also how I tend to program, though usually as an accidental consequence of my ADD brain getting distracted, then being entirely dissatisfied with my code (or worse: I was too clever with it and it's indecipherable) when I come back to it, prompting yet another rewrite.
Never thought of it this way, but it makes sense. My default response to probing/planning type questions from business is "uhhh no clue, I have to dive into the code first and find out" precisely because of this.
My workflow these days is to start by writing a Python notebook that solves the problem. There's no faster way to iterate on writing something. Once it works, I usually have o1 Pro write tests, clean it up, and then convert it to whatever language I actually need it written in.
The actual term for a "discovery writer" is a "pantser" - i.e., you're writing by the seat of your pants - and I think that's a reasonable term to adopt here too.
Confession: I'm a pantser in writing both code and prose. In both cases, coming back and writing a spec (an eng spec in the case of code, a synopsis in the case of prose) is a reasonable thing to do. Structure is good, but the point is that it shouldn't get in the way of actually getting started and making some progress.
Discovery coding may be fine, but discovery architecture is a disaster.
Nah. I'd almost argue that discovery coding is what helps define the architecture. I've seen way too many cases of over complexity designing a system for scale that will never be hit which results in an architecture that is too rigid to add new features to. If you do discovery coding you would realize the real bottlenecks and functionality that's required and can build an architecture that addresses those concerns instead of just designing for design's sake.
Yes and no. A lot depends on who the architect is.
> I don't think many tools today are designed with discovery coding in mind. Things like live programming in a running system (see how (most) Clojurists or SmallTalkers work) are incredibly valuable for a discovery programmer. Systems that let you visualize on the fly, instrument parts of the system, etc., make it much easier and faster to discover things. This is the sense of dynamic (unrelated to typing) that I feel we often lose sight of.
Python's REPL achieves this nicely, and in C# in Visual Studio the Immediate Window lets you type out code while you're debugging with a breakpoint set. I almost always copy an if statement's expression there, paste it, and get back either "true" or "false", which usually tells me insanely quickly whether my assumption is spot on or not.
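The Python equivalent of that copy-the-condition trick uses `pdb` via the built-in `breakpoint()`. A minimal sketch (the data and predicate are invented): pause inside the function and paste the conditional expression itself at the prompt to see its truth value against live state.

```python
def needs_review(order):
    """Invented predicate: flag paid orders with a zero total."""
    suspicious = order["paid"] and order["total"] == 0
    # Uncomment breakpoint() while debugging, then at the pdb prompt
    # paste the expression itself to check the assumption directly:
    #   (Pdb) order["paid"] and order["total"] == 0
    # breakpoint()
    return suspicious

print(needs_review({"id": 1, "paid": True, "total": 0}))
```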
I like that Blazor has Hot Code Reloading, though it is definitely finicky.
Feels unnecessary to call it _____ coding and try to bless it with the air of methodology when it's just hacking or jamming, as you would on guitar or a drawing or other manual creative activity. You're just moving forward with whatever level of intuition your experiences and learnings afford you :shrugs:
Casey Muratori talks about this in his Handmade Hero videos. I believe he calls it exploration-based programming.
Yeah, this is not a new concept, and I remember seeing one of his videos that explains it really well. Here's the relevant video from 9 years ago for those uninitiated:
https://youtu.be/jlcmxvQfzKQ?si=zmKT9a3yK5R4Wmg4
I feel I finally can put a name to my coding style! Incidentally, I also like to pretend classes and functions exist even if they don’t. Of course my discovery code won’t work, but I can go very far and discover a lot of useful information this way.
I’m this but the problem sometimes is I get overwhelmed by the stuff I’m working on. But honestly I wouldn’t do it any other way
In Pharo, discovery coding is really encouraged. I personally like it for writing scrapers and/or web tests (I use Selenium in Pharo). I once gave an improvised talk about it [1]. Nowadays I'm back to Python and JavaScript due to their ecosystems, but for discovery coding and Selenium-based testing/scraping, I still think the interactive experience in Pharo is unmatched.
[1] https://youtu.be/FeFrt-kdvms?si=g12m7aZtDWtMMgwJ&t=2270
Interesting piece. As a coder and writer (hardcore outliner here), I’ve thought about this too.
I wonder about writing user stories for fiction. If software is something people use to realize an outcome, fiction is also something readers consume to realize an outcome. What might a “reader story” look like for some of our favorite novels? What kind of impression or change does a writer seek to produce in a reader’s mind? Such documentation could be valuable, similar to requirements documentation.
Aside to the author: King’s first name is spelled with a “ph.” Sorry, long time fan here :)
I call this exploratory prototyping and I think it's an absolute super-power. Sitting down and noodling with a prototype for 30-90 minutes is an incredibly powerful way to build a deeper intuition for a project. You can throw that all away and still be in a much stronger position to design the approach (and have useful conversations about it).
Related concepts in Peter Naur's "Programming as Theory Building" [0] or Gerald Sussman's "Problem Solving by debugging-almost Right Plans" [1]
[0] https://pages.cs.wisc.edu/~remzi/Naur.pdf [1] https://www.youtube.com/watch?v=2MYzvQ1v8Ww
Guilty, but as with anything there is a right time to use it and a time to most definitely avoid it. If you are working with others in particular there is very little room to wing it.
Experienced devs can do so in a first pass to flesh out an idea before letting the team get involved, but thereafter the design immediately becomes rigid. Inexperienced devs can wing it to learn in an exploratory way, but their work is unlikely to be re-purposable.
If you keep the first thing you happen to write while exploring I don't think it qualifies as exploration/discovery. Then it's just regular code monkeying.
There's a time and place for everything.
I find that this style of programming is super useful in the technical design phase. During development I prefer to have a plan in place.
Am an "outliner", currently working together with a "discovery coder" on a project. We are half a year in and have no common working build, just my "outline" and a bunch of non-integrated throwaway discovery bits. I do believe that eventually they will produce the solution, but it is very hard to reason about the timeline in such a setup.
I wish this were a more recognized process. Build a rough plan, then poke, and adapt.
http://lambda-the-ultimate.org/node/5335
It took me 10+ years to realize I'm the opposite of a discovery coder. I'm much more efficient thinking through the problem with pen and paper before even touching the keyboard.
For a similar set of ideas, see "A high-velocity style of software development".
https://news.ycombinator.com/item?id=42414911
I think if you also try to constrain yourself to only building orthogonal components it leads to a needs-based Lego set. I think it is difficult to see the underlying symmetry in a problem space if you design top-down.
How is this different from https://en.wikipedia.org/wiki/Exploratory_programming?
> Discovery coding is a practice of understanding a problem by writing code first, rather than attempting to do some design process or thinking beforehand
When you write larger systems, you'd better start making up your mind about the domain — understand it, come up with concepts and names that make sense — instead of starting to code right away. Yeah, there are problem domains where an exploratory approach makes sense, but not when you're creating a product or a complex system.
Often startups don't quite know what the product is until after they've been through several rounds of showing it to customers. Hence all the product-market-fit and "pivot" talk. There's no point in building a finely crafted wrong product. And people absolutely will not tell you what product they want built if it's genuinely innovative.
Good point.
I haven't even read the article but the way I use this is to familiarize myself with new stuff. You can't plan a system when you have no idea how it's going to work. That's why it's called discovery, you just write code and see how things work, then you refactor or just throw out everything and start from scratch once you feel ready to actually build the thing.
If you just write whatever you happen to come up with and keep it that's just regular code monkeying.
It's also not a technique you use for an entire system, hopefully you're able to plan most things. But maybe a part of it integrates with a complex api you haven't used before, then it's nice to just take some time to figure out how that api really works at a level that's hard to achieve just by skimming the docs.
Agreed.
This is how I've operated for most of my career, and I partly attribute my success and productivity to it.
Discovery is either unexpected or it takes a lot of resources.
That's why I prefer pseudocode.
In my experience, some projects are very suited to discovery coding.
For my Machine Learning projects, I usually run a bunch of experiments either using pytest or directly on jupyter. I tend to plot out stuff and try out different types of feature engineering. Once I have nailed the approach, I then port a subset of this to a different jupyter notebook or python script, which is cleaner and more readable. This is because ML experiments are compute bottlenecked. So I want to ensure I spend enough time to pick the best features and model.
At work (which is not ML-related), I tend to do much less discovery coding because most of the unknowns are business related - does this API (owned by another team) scale, what is the correct place for this business logic etc. And, doing a viable Proof of Concept is time consuming, so I'd rather spend the time sweating out the nitty-gritties with product. The Discovery here must happen in the discussion with Product or other stakeholders because that is the expensive part. This is also why Product changing the Spec once the project is underway is infuriating. Sometimes a good chunk of the discovery is nullified.
I like it. This reminds me of the "tracer bullet" concept, as described in The Pragmatic Programmer by Andrew Hunt and David Thomas. As physical tracer bullets illuminate the path to a target by glowing as they travel, the aptly named software development approach aims at creating a functional, end-to-end version of a system early in the project lifecycle. This strategy balances discovery and outlining by deliberately planning development of the minimal functional path that will surface and overcome the critical difficulties. Neither big castles of imagination nor wasteful roaming, but targeted, hands-on discovery. Key points here are: end-to-end functionality, real code (not throwaway, unlike a mockup or sketch), feedback-driven, iterative improvements, risk reduction. Benefits: clarity, alignment, momentum, adaptability.
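A toy illustration of the tracer-bullet idea: wire up the full path (parse, process, render) with the thinnest real implementation of each stage, so an end-to-end result exists from day one and each stage can be deepened independently. All names here are invented for the sketch.

```python
# Tracer-bullet skeleton: every stage is real code, just minimal.
def parse(raw: str) -> list[int]:
    # Thinnest real parser: comma-separated integers. Deepen later.
    return [int(tok) for tok in raw.split(",") if tok.strip()]

def process(values: list[int]) -> int:
    # Placeholder business logic that still does something real.
    return sum(values)

def render(result: int) -> str:
    # Minimal real output formatting.
    return f"total: {result}"

def pipeline(raw: str) -> str:
    # The end-to-end path works immediately; unlike a mockup,
    # none of this gets thrown away as the stages grow.
    return render(process(parse(raw)))

print(pipeline("1, 2, 3"))
```

The contrast with a prototype is that each stage is a keeper: you replace its internals, not the wiring.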
I always appreciate someone coining a useful expression, but I gotta say, sometimes these coiners are trying a bit too hard, no? I think I'll stick with "exploration-based", "exploratory" or "explorative" programming, which are commonly used and all sound less awkward to me than "discovery coding". But hey, if that's what people flock to I'll get with the times! I've always used a combination of both approaches and the descriptions made here are great.
I agree: "exploration"-style learning. Exploration is what we are in control of. Discovery is the natural and expected result, but it isn't in our control.
Exploration style "learning" applies to anything.
But the phrase "exploration-oriented development" does make sense, given the useful and growing list of "X-oriented development" paradigms that have been identified.
Exploration is especially good for:
1. Learning how to create things for fun.
2. Building solutions in unfamiliar areas.
3. Tackling the most challenging greenfield problems and solutions.
In all cases, jump in, try things. Iterate as fast as possible to identify the most significant problem, define it as clearly as possible, find the smallest possible version of it, and try every angle to solve that. Then reorient and repeat.
Exploration is the right word, because you are letting the terrain guide you to unexpected problem perspectives, solutions, and means. From a learning goal standpoint, it is extremely efficient, and the specifics and wisdom discovered are often original.
I have spent my career doing greenfield work, chosen by me, which must be rare. Every day I push as hard as I can on the smallest problem surface I can find. The downside is that the rate of progress is unpredictable and chaotic, with very high deviation, and sometimes you expend a lot of time working your way into a real dead end. Another is that the amount of code discarded can dwarf the final code by orders of magnitude.
Damn this is me to a T.
“A programming language is for thinking of programs, not for expressing programs you’ve already thought of. It should be a pencil, not a pen.” - from PG’s “Hackers & Painters”
> For some reason, we have no such distinction in programming, so I am here to introduce it.
What? Of course there is. Exploratory programming, just writing code without big or even little design, is (or was?) incredibly common. It doesn't scale beyond one or two programmers so we don't talk about it very much, but popular products like Minecraft originated this way, and almost certainly lots of one-off useful tools fall into this category as well.
Even in big products, proof of concepts or prototypes can and are often done this way.
Common Lisp, with its excellent REPL, is great for this type of exploratory work. The ability to build up a function from the inside out, and to build systems and classes from functions, again from the inside out, trivially creating mocks and ad-hoc tests as you go, is fantastic DX.
Non-discovery coding is just a failure, both on human and engineering levels.
I worked with a manager who was diametrically opposed to writing a single line of code until a test had been written for it.
That's when I discovered I hate test driven development.
Cool stuff. I honestly think we need more of them.
There is no reason that a discovery programmer cannot create a highly structured, rigorous end-artifact.
They often stop before they do.
Designing software is a human process and not all humans are the same.
...nor is the quality of their results.