It’s a guideline. There are exceptions. Most randomly-selected 100-line functions in the wild would probably benefit from being four 25-line functions. But many wouldn’t. Maturity is knowing when the guideline doesn’t apply. But if you find yourself constantly writing a lot of long functions, it’s a good signal something is off.
Sure, language matters. Domain matters too. Pick a number higher than 70 if you're using a verbose language like Go; pick a lower one if you're using something more concise.
People need to stop freaking out over reasonable, well-intentioned guidelines as if they’re inviolable rules. 150 is way too many for almost all functions in mainstream languages. 20 would need to be violated way too often to be a useful rule of thumb.
Some junior dev will then come along with his specific user story and, not wanting to tear the whole thing up, will insert his couple of lines. Repeat over a 10 year lifecycle and it becomes completely unmanageable.
I remember trying to tease apart hundreds of lines of functions in a language which didn't have the IDE support we have these days. It was always painful.
Even the most script-like functions I've ever worked with benefit from comments like:
# Validate that the payment was successfully sent.
These subheadings can just as easily become functions. If you limit the context in each function, it becomes a genuine improvement.

After several such bad experiences, I religiously adopted a personal guideline of functions never crossing an upper limit of 100 lines, with rare (1-in-10k) exceptions like the parsing domain or state-machine dispatch. Short functions also heavily aid unit testing.
The only folks who believe in big functions are developers who haven't been exposed to large quantities of other people's shoddy code as a significant fraction of their working career. Once you have been through several such nightmares, you will worship small functions.
That’s why linters allow you to suppress specific rules for a single statement (and ideally let you write a short comment that explains why you chose to suppress it.)
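For instance, a hypothetical sketch assuming pylint's `too-many-statements` check is the rule being tripped (the function and its body are made up):

```python
def migrate_legacy_records(rows):  # pylint: disable=too-many-statements
    # Suppressed deliberately: this one-shot migration is a linear script,
    # and splitting it would scatter a sequence that reads top to bottom.
    return [dict(row, migrated=True) for row in rows]
```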
It's using a parameter of secondary importance as a primary one, so it's wrong with any number. The comment even has a helpful analogy to LOC. People need to stop freaking out over reasonable, well-intentioned criticism of guidelines as if the critics were arguing something else, like a misunderstanding of how strict the rules are.
I certainly haven't seen one. Most of the best, easiest-to-work-on, and most stable projects I've worked on in my 25-year career have had the overwhelming majority of functions under 30 LOC, and maybe 1% over 70 LOC.
As with all smells, yes, they are simply correlated symptoms of deeper issues and not necessarily proof positive. But they are correlated. Pretending they aren’t is nonsense.
And yes, arbitrarily chopping things up to quiet a linter doesn’t necessarily solve the problem, much in the same way dumping perfume onto rotten eggs doesn’t fix the underlying issue. It’s a fast and easy warning to alert you to potential issues, but at the end of the day it’s still up to developers to exercise good judgment and development practice.
> Pretending they aren’t
But you're the one doing the pretending: making up a high correlation with nothing to correlate, and coming to circular conclusions (if you define it as a smell, then yes, you aren't talking about roses).
Of course a function can be refactored in a wrongheaded way as you’ve suggested, but that’s true of any coding - there is taste.
The ideal of refactoring such a function you describe would be to make it more readable, not less. The whole point of modules is so you don’t have to hold in your head the detail they contain.
Long functions are in general a very bad idea. They don’t fit on a single screen, so to understand them you end up scrolling up and down. It’s hard to follow the state, because more things happen and there is more state as the function needs more parameters and intermediate variables. They’re far more likely to lead to complecting (see Rich Hickey) and intertwining different processes. Most importantly, for an inexperienced dev it increases the chance of a big ball of mud, eg a huge switch statement with inline code rather than a series of higher level abstractions that can be considered in isolation.
I don’t think years worked is an indicator of anything, but I’ve been coding for nearly 40 years FWIW.
Split them up into multiple functions and there is more scrolling, and now also jumping around because the functions could be anywhere.
> It’s hard to follow the state, because more things happen
It's easier to follow state, because state is encapsulated in one function, not constantly passed around to multiple.
> a huge switch statement with inline code
Huge switch statements are a common reason for large functions. Python has this architecture and largely won because the interpreter is surprisingly easy to understand and extend.
No, this is the fundamental misconception used in defense of longer functions.
With a long function, you must scroll no matter what. If you're interested in every detail, you scroll. If you want to figure out the logical steps involved, you scroll (and likely have to take notes as you go). If you want to ignore a step for now, you scroll over it.
With properly factored code, you only jump away when you need to. If you want to figure out the logical steps involved, you just read them off. If you want to ignore a step for now, you just ignore it. If you're interested in every detail, you get to start with an overview, and then you jump around. But caring about details is the exceptional case. Most of the code works most of the time in any sane development practice; when the current code works, the details don't help with understanding the current function.
Jumping around is also not more difficult than scrolling, with a proper editor.
If you are passing your whole state around, that usually means you've not designed your state well.
I would almost certainly split it up, not because "a function should only do one thing" but because invariably you get a run of several steps that can be chunked into one logical operation, and replacing those steps with the descriptive name reduces the cognitive load of reading and maintaining the original function.
A ReverseList function, for example, is useful not only because it can be used in many different places, but also because the same code would be more disruptive than helpful for understanding the overall process if it were inline. Of course, I understand that code does not always break down into such neat semantic building blocks.
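As a minimal sketch (the function names and data are hypothetical), chunking a run of steps behind one descriptive name:

```python
# Before: three inline steps the reader has to mentally group.
def render_report_inline(records):
    cleaned = [r.strip() for r in records]
    cleaned = [r for r in cleaned if r]
    return "\n".join(sorted(set(cleaned)))

# After: the run of steps becomes one named operation.
def normalize_records(records):
    """Strip whitespace, drop blanks, deduplicate, and sort."""
    stripped = (r.strip() for r in records)
    return sorted({r for r in stripped if r})

def render_report(records):
    return "\n".join(normalize_records(records))
```

The caller now reads at the level of intent, and `normalize_records` can be tested on its own.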
> Focus on separating pure code from stateful code, that's the key to large maintainable software! And choose composability over inheritance.
100%!
On the other hand, I often see functions like you describe (something has to be executed step by step, and the functionality is only used there) where I _wish_ it was split up into separate functions, so we could have meaningful tests for each step, not only for the "whole thing".
If I have a huge function, and I can peel parts off into sensible well-encapsulated sub-functions, and I name them well, then my ability to comprehend the whole goes up.
If I do that, future me will thank me, because I will almost inevitably be back to that function at some point.
But for this to work, the sub-functions have to really do what they say, and not do anything more. I have to be able to trust that I can understand them by just their name and arguments.
I don't see that in my world, so I'm naively trying to imagine inlining functions in codebases I'm familiar with, and I don't really like the result I can dream up.
For one, my tests would be quite annoying: large, with too much setup for my taste. And I don't think I'd like having to scroll through a function, especially if I had to change both the start and the end of it in one commit.
I'm curious about the kinds of "long script" flavoured procedures: what are they typically doing?
I ask because I strongly agree with some of the other things you mentioned, like "Focus on separating pure code from stateful code". It's such an undervalued concept, and an absolute game changer for building robust software. I always ask: can I extract a pure function for this and separately have a function to coordinate the side effects? But that's incompatible with overly long functions; those side-effectful functions would be so hard to test.
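A minimal sketch of that split (the names and the `db`/`mailer` collaborators are hypothetical):

```python
# Pure: no I/O, trivially unit-testable.
def compute_invoice_total(line_items, vat_rate):
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + vat_rate), 2)

# Side-effectful coordinator: thin, with the effects in one obvious place.
def bill_customer(customer_id, line_items, vat_rate, db, mailer):
    total = compute_invoice_total(line_items, vat_rate)
    db.save_invoice(customer_id, total)
    mailer.send_receipt(customer_id, total)
    return total
```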
If you have a shopping list with 100 items, that does not mean the complexity of that is high.
Longer functions are worse, everything else being equal, but sometimes they have very low complexity.
To answer your question: if you have, for example, a function that returns a certain value for a key (basically a dictionary), say for some translations, that can be a very long function. Of course you could move the data out of the code into a data file, etc., but there is nothing wrong in principle with a function like that. Such a function is closer to extracting state into data than one that gets refactored into many small functions.
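A toy version of that contrast (the translation keys are made up):

```python
# Long but almost zero complexity: one branch per key.
def translate(key):
    if key == "greeting":
        return "Hello"
    if key == "farewell":
        return "Goodbye"
    if key == "thanks":
        return "Thank you"
    # ...hundreds more lines like this in the real thing...
    return key

# The same content extracted into data rather than into smaller functions.
TRANSLATIONS = {
    "greeting": "Hello",
    "farewell": "Goodbye",
    "thanks": "Thank you",
}

def translate_from_data(key):
    return TRANSLATIONS.get(key, key)
```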
For me, I see it a lot in older video game engines, with a switch statement that has hundreds of possible states that need to be taken into account.
That kind of code is less common these days, but it's nice to be able to see all the possible states in a single switch statement, versus having them all abstracted away into multiple classes that each handle a subset of the state transitions.
This is one case where having a super long function makes readability far better than abstracting away the states into multiple classes that all need to be read to understand the state of the world.
It's not the function length per se. A function that is 1000 lines of mere basic assignments, or that holds a single giant switch, can sometimes be an apt option, given careful consideration of the tradeoffs at the origin of the design. The number of lines doesn't say much about a function's complexity or the cognitive load it takes to grasp what it does, though it can serve as a first proxy metric.
But most of the time, the giant functions found in the wild grew organically, with five levels of intertwined control flow moving up and down, accumulating mutable variables instead of constants with no consideration of scope. In that case, every time a change is needed, the cognitive load required to grasp everything the change could affect is enormous. All the more so because such a giant function most likely won't have a companion test suite, since good engineering practices tend to be followed (or neglected) together.
Example:
    def process_order(order):
        _validate(order)
        _reserve(order)
        _charge(order)
        _confirm(order)

    def _validate(order):
        ...

    def _reserve(order):
        ...

To me, that's key here. That things are scattered over multiple files is a minor issue. Any competent IDE can more or less hide that and smooth over the experience. But if you have factored some code into a function, suddenly other places may call it. You have inadvertently created an API, and any change you make requires double-checking that other callers either don't exist or don't have their assumptions suddenly violated. That's not an issue if the code is right there: no other users, and the API is just the direct context of the lines of code around it. (Yes, you can limit visibility to other modules etc., but that doesn't fully solve the issue of higher cognitive load.)
That is precisely the reason why the scope of a function's container should be constrained. A class with a Single Responsibility (SRP) would not suffer from several private methods. Breaking up your methods in a God Class brings both a decrease and an increase in mental load.
Also, in functional programming languages one can nest functions, and OOP languages oftentimes can nest classes.
Haskell does function hiding the most elegant imo, by means of `where`:
    foo x =
      if odd x then
        short x
      else
        reallyLongFunc x
      where
        short y = y * 2
        reallyLongFunc z = z + 2

If you were to simply leave it a mess like that, you would be right.
Who says that the 12 functions need to be in multiple packages? And you can also keep them in the order you call them.
Having 12 functions is probably easier to test, too.
Yes, the 70-line rule is arbitrary. But it's only as arbitrary as using 80 columns.
And then I read code and see a 50-line function and am like “wow, this function is doing a lot”.
Sure, strict rules aren’t amazing, but I think it would be cool to have a linter warning when there are more than X functions with over 70 lines (this is language-dependent; verbosity).
I see huge >130 line functions as a liability. There's so much state in it that a mistake on one line is not obvious. It makes those functions "sticky" and they tend to become the center of a lot of call graphs... like a neutron star. When a mistake is made maintaining or modifying this function it tends to have far-reaching side effects.
On the other hand some APIs (looking at you, OpenGL) are just so verbose that you can't avoid long functions.
I think it's generally good to compose functions from smaller functions where possible. Sometimes you can't and probably shouldn't. But it's hard to give a quantifiable rule in my experience. Approximations work but will never be perfect.
At least we aren't talking about "clean code" level of absurdity here: 5-20 lines with 0 - 2 parameters.
'Sup, it's me, the "new dev". Except I, too, have been at it for decades, and I get more and more attached to short functions year over year. (You are correct about composition and about isolating state mutations. But short functions are tools that help me to do those things. Of course, it helps a ton to have functions as first-class objects. Function pointers are criminally underused in C codebases from what I've seen. They can be used for much more than just reinventing C++ vtables.)
People put numbers on their advice because they don't trust the audience to have good taste, or to have a sense of the scale they have in mind. Of course that has the downside that metrics become targets. When I see a number in this kind of advice, I kinda take it in two passes: understand what kind of limit is proposed (Over or under? What is being limited?) and then go back and consider the numeric ballpark the author has in mind. Because, yes, 70 lines of Python is not the same as 70 lines of C.
But I can scarcely even fathom ten lines in a Python function that I write nowadays. And I'm rather skeptical that "LOC needed to represent a coherent idea" scales linearly with "LOC needed to make a whole program work".
> Now you have 12 functions, scattered over multiple packages, and the order of things is all confused, you have to debug through to see where it goes. They're used exactly once, and they're only used as part of a long process. You've just increased the cognitive load of dealing with your product by a factor of 12. It's downright malignant.
Well, no, that isn't what happens at all.
First off, the files where new functions get moved, if they get moved at all, are almost certainly going to be in the same "package" (whatever that means in the programming language in use). The idea that it might be hard to find the implementation code for something not in the current file, is pretty close to being a problem unique to C and C++. And I'm pretty sure modern IDEs have no problem dealing with that anyway.
Second, it absolutely does not "increase the cognitive load by a factor of 12". In my extensive experience, the cognitive load is decreased significantly. Because now the functions have names; the steps in the process are labelled. Because now you can consider them in isolation — the code for the adjacent steps is far easier to ignore.
Why would you "have to debug through to see where it goes"? Again, the functions have names. If the process really is purely sequential, then the original function now reads like a series of function calls, each naming a step in the sequence. It's now directly telling you what the code does and how. And it's also directly telling you "where it goes": to the function that was called, and back.
You also no longer have to read comments interspersed into a longer code flow, or infer logical groupings into steps. You can consider each step in isolation. The grouping is already done for you — that's the point. And if you aren't debugging a problem, that implies the code currently works. Therefore, you don't need to go over the details all at once. You are free to dig in at any point that tickles your curiosity, or not. You don't have to filter through anything you aren't interested in.
(Notice how in the three paragraphs above, I give one-sentence descriptions in the first paragraph of individual advantages, and then dedicate a separate paragraph to expanding on each? That is precisely the same idea of "using short functions", applied to natural language. A single, long paragraph would have been fewer total words, but harder to read and understand, and less coherent.)
All of that said, you don't really debug code primarily by single-stepping through long functions, do you? I find problems by binary search (approximately, guided by intuition) with breakpoints and/or logging. And when the steps are factored out into helper functions, it becomes easier to find natural breakpoints in the "main" function and suss out the culprit.
Shorter functions absolutely do have the properties described in the quote. Almost definitionally so. Nobody really groks code on the level of dozens of individual statements. We know brains don't work like that (https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...).
A robust design is one that not only is correct, but also ensures the functionality even when boundary conditions deviate from the ideal. It's a mix of stability, predictability and fault tolerance. Probably "reliable" can be used as a synonym.
At the same time, in all industries except CS "safety" has a specific meaning of not causing injuries to the user.
In the design of a drill, for example, if the motor is guaranteed to spin at the intended rpm independently of orientation, temperature and state of charge of the battery, that's a robust design. You'll hear the word "safe" only if it has two triggers to ensure both hands are on the handles during operation.
It's a very soft distinction, I agree. And possibly one that translates less well to the physical world where wear and tear are a big factor for robustness. You can't prove an engine to be safe after thousands of hours. But you can make it robust against a lot of expected stuff it will encounter over those hours. Safety features tend to be more about protecting people than the equipment.
Basically, safety is "bad things won't happen" and liveness is "good things eventually happen".
This is almost always the way safety is used in a CS context.
FWIW "safety factors" are an important part of all kinds of engineering. The term is overloaded and more elusive in CS because of the protean qualities of algorithmic constructs, but that's another discussion.
I agree it's confusing when you step outside of CS world though.
That makes sense, as CS cuts across industries. The same practices that literally help save lives in one domain will merely prevent a minor irritation in another.
The most important advice one can give to programmers is to
1. Know your problem domain.
2. Think excessively deeply about a conceptual model that captures the relevant aspects of your problem domain.
3. Be anal about naming your concepts. Thinking about naming oftentimes feeds back into (1) and (2), forming a loop.
4. Use a language and type system that is powerful enough to implement the previous points.

But that last 10% of checking may be really hard to encode in types. It may be especially hard to do so in the language you want to use for other reasons.
I work on financial data processing where you genuinely have 15 sequential steps that must run in exact order: parse statement, normalize dates, detect duplicates, match patterns, calculate VAT, validate totals, etc. Each step modifies state that the next step needs.
Splitting these into separate functions creates two problems: (1) you end up passing huge context objects between them, and (2) the "what happens next" logic gets scattered across files. Reading the code becomes an archaeology exercise.
What I've found works better: keep the orchestration in one longer function but extract genuinely reusable logic (date parsing, pattern matching algorithms) into helpers. The main function reads like a recipe - you can see the full flow without jumping around, but the complex bits are tucked away.
70 lines is probably fine for CRUD apps. But domains with inherently sequential multi-step processes sometimes just need longer functions.
I do not work in finance, but I've written some exceptionally complex business logic this way. With a single public orchestrator function you can just leave the private functions in place next to it. Readability and testability are enhanced by chunking out each step and making logic obvious. Obviously this is a little reductive, but what am I missing?
The nuance: the context struct approach works well when steps are relatively independent. It gets messy when step 7 needs to conditionally branch based on something step 3 discovered, and step 12 needs to know about that branch. You end up with flags and state scattered through the struct, or you start passing step outputs explicitly, and the orchestrator becomes a 40-line chain of if/else deciding which steps to run.
For genuinely linear pipelines (parse → transform → validate → output), private functions + orchestrator is clean. For pipelines with lots of conditional paths based on earlier results, I've found keeping more in the orchestrator makes the branching logic visible rather than hidden inside step functions that check context.flags.somethingWeird.
Probably domain-specific. Financial data has a lot of "if we detected X in step 3, skip steps 6-8 and handle differently in step 11" type logic.
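A stripped-down sketch of that shape, with every step stubbed out and all names hypothetical:

```python
# Step stubs; each returns a new dict rather than mutating its input.
def parse(txn):
    return dict(txn, parsed=True)

def is_transfer(txn, accounts):
    return txn.get("counterparty") in accounts

def handle_as_transfer(txn):
    return dict(txn, category="transfer", vat=0.0)

def match_patterns(txn):
    return dict(txn, category="expense")

def calculate_vat(txn):
    return dict(txn, vat=round(txn["amount"] * 0.2, 2))

def process_transaction(txn, accounts):
    # The branch based on an early discovery stays visible here,
    # instead of hiding inside steps that check context flags.
    txn = parse(txn)
    if is_transfer(txn, accounts):
        return handle_as_transfer(txn)  # later steps are skipped, visibly
    txn = match_patterns(txn)
    return calculate_vat(txn)
```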
Sounds simple until you hit the real-world mess:
1. *Ambiguous descriptions*: "CARD 1234 AMAZON" could be office supplies, inventory, or someone's personal expense on the company card. Same vendor, completely different accounting treatment.
2. *Sequential dependencies*: You need to detect transfers first (money moving between your own accounts), because those shouldn't hit expense/income at all. But transfer detection needs to see ALL transactions across ALL accounts before it can match pairs. Then pattern matching runs, but its suggestions might conflict with the transfer detection. Then VAT calculation runs, but some transactions are VAT-exempt based on what pattern matching decided.
3. *Confidence cascades*: If step 3 says "70% confident this is office supplies," step 7 needs to know that confidence when deciding whether to auto-post or flag for review. But step 5 might have found a historical pattern that bumps it to 95%. Now you're tracking confidence origins alongside confidence scores.
4. *The "almost identical" trap*: "AMAZON PRIME" and "AMAZON MARKETPLACE" need completely different treatment. But "AMZN MKTP" and "AMAZON MARKETPLACE" are the same thing. Fuzzy matching helps, but too fuzzy and you miscategorize; too strict and you miss obvious matches.
5. *Retroactive corrections*: User reviews transaction 47 and says "actually this is inventory, not supplies." Now you need to propagate that learning to similar future transactions, but also potentially re-evaluate transactions 48-200 that already processed.
The conditional branching gets gnarly because each step can short-circuit or redirect later steps based on what it discovered. A clean pipeline assumes linear data flow, but this is more like a decision tree where the branches depend on accumulated state from multiple earlier decisions.
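The fuzzy-matching tension in point 4 can be sketched with the standard library's `difflib` (the 0.6 threshold is an arbitrary assumption, and tuning it is exactly the hard part):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Ratio in [0, 1]; case-insensitive comparison.
    return SequenceMatcher(None, a.upper(), b.upper()).ratio()

def same_vendor(a, b, threshold=0.6):
    # Too low a threshold miscategorizes; too high misses obvious matches.
    return similarity(a, b) >= threshold
```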
I've been bitten by this. It's not the length that's the problem, so much as the surface area which a long function has to stealthily mutate its variables. If you have a bunch of steps in one function all modifying the same state, there's a risk that the underlying logic which determines the final value of widely-used, widely-edited variables can get hard to decipher.
Writing a function like that now, I'd want to make very sure that everything involved is immutable & all the steps are as close to pure functions as I can get them. I feel like it'd get shorter as a consequence of that, just because pure functions are easier to factor out, but that's not really my objective. Maybe step 1 is a function that returns a `Step1Output` which gets stored in a big State object, and step 2 accesses those values as `state.step1Output.x`. If I absolutely must have mutable state, I'd keep it small, explicit, and as separate from the rest of the variables as possible.
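A minimal sketch of that shape with frozen dataclasses (all names hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step1Output:
    amount: float
    description: str

@dataclass(frozen=True)
class Step2Output:
    vat: float

def step1_parse(raw):
    # Produces an immutable record; later steps can read it but not mutate it.
    return Step1Output(amount=raw["amount"], description=raw["desc"].strip())

def step2_vat(parsed: Step1Output, rate: float) -> Step2Output:
    return Step2Output(vat=round(parsed.amount * rate, 2))
```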
The discipline tax is real though. Pure functions are easier to test in isolation but harder to trace when you're debugging "why did this transaction get coded wrong" and the answer spans 6 different step outputs.
That's definitely a pain, but I'm not sure it's easier when this is one variable being mutated in six different places. I think you're just running into the essential complexity of the problem.
The immutable version is probably easier to debug in practice since you can inspect each step's output independently. The "6 places" complaint was more about cognitive load during debugging than actual difficulty - you're jumping between files instead of scrolling through one. But that's a tooling/IDE problem, not an architecture one.
We're in Node.js so the best we can do is runtime checks and careful typing. I've experimented with builder patterns that sort of approximate this - each method returns a new type that only exposes the valid next operations - but it's clunky compared to proper typestate.
The real benefit isn't just preventing out-of-order calls, it's making invalid states unrepresentable. Half our bugs come from "somehow this transaction reached step 9 without having the field that step 5 should have populated."
Where I've seen it go sideways is when it accumulates process state: wasTransferDetected, skipVATCalculation, needsManualReview, originalMatchConfidence. Now you have a data object that's also a control flow object, and understanding the code means understanding which flags get set where and what downstream checks them.
Your point about discrete success/failure conditions is well taken though. We moved toward exactly that - each phase either returns its result or an explicit error, and the orchestrator handles the failures instead of stuffing error flags into the context for later. Bit more boilerplate but much easier to reason about.
Concrete example: "is this a transfer between accounts?" is pure business logic - takes a transaction and a list of bank accounts, returns true/false. That gets its own function with its own tests.
But "if it's a transfer, skip VAT calculation and use a different account mapping" is orchestration. Pushing that into the transfer detection function means it now needs to know about VAT and account mapping, which breaks isolation. Keeping it in the primary method means you can see all the skip/branch decisions in one place.
The spaghetti risk is real. What helps: keeping the orchestrator as declarative as possible. "if transfer detected, result = handleAsTransfer()" rather than inline logic. The primary method becomes a readable list of conditions and delegations, not nested logic.
No technical debt is not the same thing as “perfection”. Good enough doesn’t mean perfect.
Would it be ok to submit an essay with only 90% of the underlined spelling mistakes fixed? Do you paint your outdoor table but leave the underside for later?
Do it once, do it right. That doesn’t mean perfect, it means not cutting corners.
“Zero tech debt” is an impossibility. The most elegant solutions incur some kind of tech debt, it’s just less than others. More realistic than “zero tech debt” is a continuing dedication to addressing tech debt combined with using implementations that minimize “one way doors”.
I don't know how this philosophy is applied at TigerBeetle. When I establish engineering guidelines I try to frame them as exactly that: guidelines. The purpose is to spawn defensible reasoning, and to trigger reflection.
For example, I might say this:
We use a heuristic of 70 lines not as a hard limit, but as a "tripwire." If you cross it, you are not 'wrong,' but you are asked to pause and consider if you're introducing unintentional complexity. If you can justify it, keep it—there's no need to code golf.
"Style," "philosophy," "guides," they're all well-meaning and often well-informed, but you should be in command of the decision as the developer and not forget your own expertise around cohesion, cognitive load, or any functional necessities.There are staunch believers in gating deploys based solely on LOCs, I'm sure... I like the idea of finding ways to transparently trigger cognitive provocations in order for everyone to steer towards better code without absolutes.
Do you not use word wrap? The downside of this rule is increased vertical scrolling (yes, scrolling is easier, but with wrapping you can make that decision locally) and reduced accessibility (monitors are wide, not tall). This is especially an issue when the style is applied to comments: you can't see all the code on a single screen because of multiple lines of comments in that long, formal, grammatically correct style.
Similarly:

> Limit function length: Keep functions concise, ideally under 70 lines, and move non-branching logic to helper functions.
This breaks the accessibility of the logic: instead of linearly reading what's going on, you have to jump around (though popups could help a bit). Whereas with a long function you can use block collapse to hide those helper blocks without losing their locality, and then expand only the one you need.
It’s easier to hide comments than do code wrapping correctly. And comments are usually in a lighter color than code and easy to skip over.
> Break accessibility of logic, instead of linearly reading what's going on you have to jump around (though popups could help a bit)
Extracting a function is for abstraction: instead of deciphering some block and tracking its entanglements, you have a nice name and a signature that document its inputs and outputs. And maybe a nice docstring that summarizes its purpose.
It does require taste and design skill.
How do color and the ease of skipping address the fact that you can't see code because it's pushed off screen? Especially compared to the ease of skipping when the whole paragraph of comments is wrapped into a single long line that you don't need to read.
> And maybe a nice docstring that summarizes its purpose.

Which you could just as well have right there where you read the block. The nice name and signature, yes, can be situationally better.
> Which you could just as well have right there where you read the block.
A function forces encapsulation and make explicit the external factors. A block doesn’t make clear which variables in the parent scope it’s using or if everything in the block is one logical step.
If you're risking money and time, can you really justify this?
- 'writing code that works in all situations'
- 'commitment to zero technical debt'
- 'design for performance early'
As a whole, this is not just idealist, it's privileged.
- 'commitment to zero technical debt'
- 'design for performance early'
Will save you time and cost in designing, even in the relatively near term of a few months when you have to add new features etc.
There are obviously extremes of "get something out the door fast and broken, then maybe neaten it up later" vs "refactor the entire codebase any time you think something could be better", but I've seen more projects hit a wall from leaning too far toward the first than the second.
Either way, I definitely wouldn't call it "privileged" as if it isn't a practical engineering choice. That just frames things in a way where you're already assuming early design and commitment to refactoring is a bad idea.
Time spent, monetary cost, and uncertainty are all practical concerns.
An engineering problem where you can ignore time spent, monetary cost, and uncertainty is a privileged position. Very few engineering problems can afford a philosophy that makes no mention of these factors.
That’s great and all as an ideal but realistically impossible so if you don’t have anything more substantial to offer then you aren’t really worth taking seriously.
You can get a surprising amount done when you aren’t spending 90% of your time fighting fires and playing whack-a-mole with bugs.
I'm not saying the principles themselves are poor, but I don't think they're suitable for a commercial environment.
I think the point is well made though. When you're building something like a transactions database, the margin for error is rather low.
But ofc, I understand styleguides are... well.. guides. Not law.
The real joy of having coding standards, is that it sets a good baseline when training junior programmers. These are the minimum things you need to know about good coding practice before we start training you up to be a real programmer.
If you are anything other than a junior programmer, and have a problem with it, I would not hire you.
In languages with TCO (e.g. Haskell, Scheme, OCaml, etc.) the compiler can rewrite to a loop.
Some algorithms are conceptually recursive, and even though you can rewrite them, the iterative version would be unreadable: backtracking solvers, tree parsing, quicksort partitioning and its subproblems, divide-and-conquer, tree manipulation, compilers, etc.
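A sketch of both sides in Python (which, notably, has no TCO, so the loop rewrite must be done by hand): the tail-recursive form mirrors what a TCO compiler would mechanically turn into the loop below it.

```python
def sum_to_recursive(n: int, acc: int = 0) -> int:
    """Tail-recursive: would run in constant stack under TCO.

    In Python, large n blows the recursion limit instead.
    """
    return acc if n == 0 else sum_to_recursive(n - 1, acc + n)

def sum_to_iterative(n: int) -> int:
    """The mechanical loop rewrite a TCO compiler would produce."""
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc
```

For genuinely tree-shaped recursion (backtracking, divide-and-conquer), there is no such mechanical rewrite; you'd need an explicit stack, which is where the iterative version stops being readable.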
1. 100% code coverage
2. 100% branch coverage
3. 100% lint pass (without noqa)
4. 100% type check pass (for Python/JS)
5. 100% documentation coverage
6. All functions with complexity less than 5
7. All functions with fewer than 70 lines
8. All files with fewer than 1000 lines
These make code high quality, and quality of life is directly proportional to the quality of your code.
https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI...
I’ll let others speak to the Barr standard.
Why?
1. Dynamic memory allocation is error prone. Mixing it in with your control flow makes that hard to manage.
Many of the strategies for keeping track of dynamic memory add significantly to the complexity of the program... You need to consider object ownership, maybe introduce reference counting, even garbage collection, or "borrow checkers".
If you can avoid all of that by making good architectural choices up front then your code will be much simpler and robust.
2. Dynamic allocation is slow, especially in a multithreaded environment. It's doubly slow if you layer garbage checking or whatever on top of it.
Prefer fewer trips to the allocator, preferably none in your code's hot path.
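A hedged sketch of the "allocate up front" idea, in Python for brevity (real systems code would do this in C or Zig with fixed arrays): a pool of buffers preallocated at startup, so the hot path only hands them out and takes them back, never touching the allocator.

```python
class BufferPool:
    """Preallocate every buffer at startup; the hot path never allocates."""

    def __init__(self, count: int, size: int) -> None:
        # All allocation happens here, once, before the hot path runs.
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self) -> bytearray:
        # Hot path: pop from the free list, no allocation.
        if not self._free:
            raise RuntimeError("pool exhausted; size it for peak load")
        return self._free.pop()

    def release(self, buf: bytearray) -> None:
        # Hot path: return the buffer for reuse.
        self._free.append(buf)
```

The failure mode also becomes explicit: exhausting the pool is a sizing decision you made up front, not a surprise OOM mid-request.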
You would need so much more physical memory with this approach. For example, you may want to allocate a lot of memory for the browser, but then close the tab. 2 seconds later you want to use it for a compiler. And then you switch to a code editor.
If this topic floats your boat, go look up the NASA coding standards. For a few projects, I tried to follow a lot of their flow control recommendations, and will still reach for: `while ... && LIMIT > 0` in some situations.
Still a huge fan of including some type info in the variable name, eg: duration_s, limit_ms makes it extremely clear that you shouldn't mix math on those integers.
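A sketch combining both habits (a NASA-style bounded loop plus unit-suffixed names; the function and its parameters are illustrative, not from any standard):

```python
import time

def wait_until_ready(is_ready, poll_interval_ms: int, timeout_ms: int) -> bool:
    """Bounded wait: the _ms suffixes make the unit explicit at every use."""
    # Hard iteration cap, so the loop provably terminates.
    polls_remaining = max(1, timeout_ms // poll_interval_ms)
    while polls_remaining > 0:
        if is_ready():
            return True
        time.sleep(poll_interval_ms / 1000.0)  # convert ms -> s exactly once
        polls_remaining -= 1
    return is_ready()
```

Passing `timeout_ms=5` where seconds were expected now reads as obviously wrong at the call site, which is the whole point of the suffix.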
To say it differently: if you wrote code that was perfect in time 0, that code may become legacy in time 100.
Are they saying you should continuously refactor all your code to cover the 'current user needs'?
I just think it's an oversimplification for those cases where you don't mind not covering the 0.001% of use cases.
Doing good design is of course important, but on the other hand software design is often iterative because of unknown unknowns. Sometimes it can be better to create quick prototype(s) to see which direction is best before actually "doing it right", instead of spending effort designing something that in the end won't be built.
For many shops it’s too much effort for the payoff. Unless you work on medical devices or aerospace then maybe.
Your situation will warrant some of these things, or doing them to a lesser degree. Part of our role is to decide and recommend what’s appropriate.
“Rugged” describes software development organizations that have a culture of rapidly evolving their ability to create available, survivable, defensible, secure, and resilient software.
https://github.com/rugged-software/rugged-software.github.io
The usual BS... yes, shorter functions are easier to understand by themselves but what matters, especially when debugging, is how the whole system works.
Edit: care to refute? Several decades of experience has shown me what happens. I'm surprised this crap is still being peddled.
It's a reasonable guideline. Juniors won't do this automagically.
There are far more real developers to learn good practices from. Brian Kernighan, Dennis Ritchie, Linus Torvalds, etc.
Don't get me wrong, I often apply it myself and refactor code into smaller functions. But readability, understandability and ease of maintenance comes first.
Especially juniors will apply such rules overzealously, bending over backwards to stay within an arbitrary limit while at the same time not really understanding why.
Frustratingly, their lack of experience makes it impossible to discuss such dogmatic rules in any kind of nuanced way, while they energetically push for stricter linter rules etc.
I've tried, and one even argued there's never ever a reason to go over N lines because their book is "best-practice" and said so. And you should not deviate from "best-practice" so that's it. I'm not making this up!
That said, I'm always open to discuss the pros and cons for specific cases, and I do agree the default should be to lean towards smaller functions in 90% of cases.
Any recommendations for other coding philosophies or "first principle" guides? I know of "extreme programming" but not much else.
There was an interesting debate between John and Uncle Bob on their differences in style recently[1], with a related HN discussion[2].
[1] https://github.com/johnousterhout/aposd-vs-clean-code/blob/m...
> Do it right the first time
So easy, why didn't I think of that!? /s
Reminds me of the mental health meme of telling depressed people to just be happier instead.
I don't necessarily disagree with a lot in this philosophy, but much of it is puffery if not accompanied by practical positive and negative examples. If a junior with little experience reads this, I'm not sure if they'll be better or worse off.
For example, "Design for performance early" is dangerous if it leads to premature optimization. But that's not mentioned. Practical positive and negative examples that illustrate the balance between these two concerns would make the advice actionable.
Won't happen. This is for TigerBeetle.
This is a common misconception.
Say you have a process that is single threaded and does a lot of stuff that has to happen step by step. A new dev comes in and starts splitting everything it does into 12 functions, because _a function should do one thing!_ Even better, they start putting stuff in various files because the files are getting too long.
Now you have 12 functions, scattered over multiple packages, and the order of things is all confused, you have to debug through to see where it goes. They're used exactly once, and they're only used as part of a long process. You've just increased the cognitive load of dealing with your product by a factor of 12. It's downright malignant.
Code should be split so that state is isolated, and business processes (intellectual property) are also self-contained and testable. But don't buy into this "70 lines" rule. It makes no sense. 70 lines of Python isn't the same as 70 lines of C, for starters. If code is sequential and always running in that order, and it reads like a long script, that's because it is!
Focus on separating pure code from stateful code, that's the key to large maintainable software! And choose composability over inheritance. These things weren't clear to me the first 10 years, but after 30 years, I've made those conclusions. I hope other old-timers can chime in on this.
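A tiny illustration of that split (hypothetical names): keep the decision logic pure, and confine mutable state to a thin shell that delegates to it.

```python
def apply_discount(subtotal: float, loyalty_years: int) -> float:
    """Pure: same inputs, same output; trivially unit-testable."""
    rate = 0.05 if loyalty_years >= 2 else 0.0
    return subtotal * (1 - rate)

class CheckoutSession:
    """Stateful shell: owns the mutable state, delegates logic to pure code."""

    def __init__(self) -> None:
        self.subtotal = 0.0

    def add_item(self, price: float) -> None:
        self.subtotal += price

    def total(self, loyalty_years: int) -> float:
        # All the business rules live in the pure function.
        return apply_discount(self.subtotal, loyalty_years)
```

The pure part gets exhaustive unit tests with no setup; the shell gets a handful of integration checks. Note this is composition, not inheritance: the shell holds and calls, it doesn't subclass.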
The length of functions in terms of line count has absolutely nothing to do with "a more modular and maintainable codebase", as explained in the manifesto.
Just like "I committed 3,000 lines of code yesterday" has nothing to do with productivity. And a red car doesn't go faster.