Have you tried giving it coding standards and other such preferences about how you like your code to be organized? I’ve found that coding agents can be quite adaptable to various styles; you can put stuff like “try to keep functions less than 100 lines long” or “include assertions validating all function inputs” into your coding agent’s general instructions and it’ll follow them.
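For concreteness, an instruction like the assertions one tends to produce functions shaped roughly like this (a hypothetical sketch, not actual agent output):

    # Hypothetical example of the "include assertions validating all function inputs" standard.
    def chunk(items: list, size: int) -> list[list]:
        """Split items into consecutive chunks of at most `size` elements."""
        assert isinstance(items, list), "items must be a list"
        assert isinstance(size, int) and size > 0, "size must be a positive int"
        return [items[i:i + size] for i in range(0, len(items), size)]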
For me, one of the things that’s a huge fundamental improvement is telling the agent to create and run unit tests for everything. That way, when it does mess up accidentally, it can immediately catch the problem and usually fix it in the same session without further intervention. Unit tests used to be more trouble than they were worth most of the time; now I love them.
After I worked with AI agents a little, I dove in with a big set of coding standards and practices and… I overdid it. I find I get better results by starting off with a “light touch” and letting it do what it wants, then correcting where it gets off track (like using Python for something that needs to be fast…)
You… just started writing unit tests?
No, I’ve used them plenty before. I just found them to generally be a huge hassle for minimal benefit. They became much more useful in the context of agentic coding, where you want the agent to be able to immediately realize “oh, this change I made causes these specific problems when it’s run.” The hassle is all on the agent, not on me.
So much this. Putting that hassle on the agent - a few minutes of me waiting for it to crunch out the unit tests - saves me tons of hassle later: no going in circles re-fixing problems that were fixed before.
Same for keeping implementation code and documentation in sync - I’ve got hundreds of out-of-date wiki pages that simply aren’t worth my time to fix. But when it’s the agent keeping the docs in sync, just tell it to do it and wait a few minutes - totally worth the effort.
I think we do very different development.
Could be. I’m a professional programmer whose usage runs the whole gamut - large applications with hundreds of programmers working on them for years, smaller apps that I make for my own use, and one-off scripts to do some particular task and then generally throw away afterwards.
I don’t do unit tests for that last category, of course. I don’t even use coding agents for those, generally speaking - a bit of back-and-forth in a chat interface is usually enough there.
Is this like a “who’s got a bigger portfolio” situation? I’m not sure how to respond.
I guess I’ve been developing for decades, including consulting for Page 6 and a stint in R&D at Sony Music. One of my open source contributions was used as part of the backend for one of Obama’s State of the Union addresses. I spend my time these days writing and maintaining multiple software stacks integrating across multiple platforms.
Since you brought up the notion that we might be doing different styles of development, I was giving you context as to the kinds of development that I do. Sounds like we might not be doing such different scales of development after all, but I couldn’t have known that until you gave that information just now.
This isn’t supposed to be some kind of duel or argument; I don’t see the point of that. I’m just explaining my usage of coding agents, and specifically unit tests in that context, since that’s what you were questioning.
I see; it seemed more like a weird flex.
Anyways, I couldn’t possibly deploy with any confidence a large project, or honestly a small project I expected someone to rely on, without layers of tests. Unintended consequences of even a small change are just a reality. And with the expectation to move quickly in large legacy systems, if you don’t have tests that’s a dangerous high-wire act.
In my world, that depends just about entirely on how “dynamic” the code base is expected to be after release. We send a lot of things into the field, thousands of copies used for important work, and we pretty much know that certain aspects of the system are unlikely to change once released, while others are very likely to change. “Back in the day” we’d make reasoned judgement calls about which ones would benefit from the effort of unit / integration testing and where that effort would be better invested elsewhere. As time marches on, our procedures and cross-departmental “advisors” who aren’t so cozy with the code are relentlessly pushing for more and more automated testing. It is safer, no argument, but it also delays launch - sometimes without added value IMO.
I meant my first sentence to be an apology for jumping to conclusions but it clearly isn’t. It’s late. Sorry for the snarky response.
Well, I’ve seen large projects without extensive unit tests before. The main time I remember a big project having them before coding agents, they were largely a checkbox: developers implemented them with a grumble when first deploying a new system, and then they were slowly disabled one by one as later changes broke them.
These were stand-alone projects, though, with a large QA department and without an expectation of future versions directly descended from them once deployed. If it worked then it worked, that was all that was needed at the end of the day.
Wow what a circlejerk this turned into.
Oh well, I guess that’s what everything really is the whole time.
We have ours configured with our coding standards and MCPs, and we have a skill library.
It still outputs code full of mistakes. Usually they’re minor mistakes, but not always.
When we use it to fix defects, it usually fixes the problem, but not in a very robust way. It still needs a lot of supervision to output quality code. For example, it will often spot-fix defects instead of applying the principle of the fix to other areas that also need it (e.g., we needed to normalize some data, but it only did it in one place because the ticket only mentioned that one place, even though that data is used elsewhere as well).
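A sketch of what I mean (hypothetical names, not our actual code): the robust fix puts the normalization in a shared helper that every path through the data uses, not just the call site named in the ticket.

    # Hypothetical sketch: centralize the normalization instead of spot-fixing one call site.
    def normalize_phone(raw: str) -> str:
        """Strip formatting so every consumer sees the same canonical form."""
        return "".join(ch for ch in raw if ch.isdigit())

    def save_contact(db, raw_phone: str) -> None:
        # The path the ticket mentioned...
        db.insert("contacts", phone=normalize_phone(raw_phone))

    def import_contacts(db, rows: list[dict]) -> None:
        # ...but this path handles the same data and needs it too.
        for row in rows:
            db.insert("contacts", phone=normalize_phone(row["phone"]))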
It’s a helpful tool for sure, but it’s rare that I don’t need to make corrections.
I’ll say that during a recent week when I was forced to use an LLM, I found Claude Opus to be extremely poor at referencing this guide: https://mywiki.wooledge.org/BashPitfalls

It took almost an hour to get Claude to write me a shell script which I considered to be of acceptable quality. It completely hallucinated about several of the points in that guide, requiring me to just go read the guide myself to verify that the language model was falsifying information. That same task would have taken me about 5 minutes.
I believe that GIGO applies here. 99% of shell scripts on the internet are unsafe and terrible (looking at you, set -euo pipefail), and Claude is much more likely to generate god-awful garbage because of the inherent bias present in the training data.

And as for unit tests? Imo, anything other than property-based testing is irrelevant. If you’re using something like Pydantic, you can auto-generate a LOT of your tests using the rich type annotations available in that library along with Hypothesis. I tend to write a testing framework once, and then special-case property tests for things that fall outside of my models. None of this is super helpful for big ugly codebases with a lot of inertia around practices, but that’s not been my environment, thankfully.
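As a rough illustration of what I mean, assuming Pydantic v2 and Hypothesis (the model and the property are made up for the example):

    # Hypothetical sketch: generating test inputs from a Pydantic model with Hypothesis.
    from hypothesis import given, strategies as st
    from pydantic import BaseModel

    class User(BaseModel):
        name: str
        age: int

    users = st.builds(
        User,
        name=st.text(min_size=1),
        age=st.integers(min_value=0, max_value=130),
    )

    @given(users)
    def test_round_trip(user: User) -> None:
        # Property: dumping a valid model and re-validating it yields an equal model.
        assert User.model_validate(user.model_dump()) == user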
WTF, are you expecting Claude to code in bash?
I have found Sonnet and Opus to both be very capable in bash, but then, I don’t usually ask bash to do super-complex things - its syntax is just too screwy to think about big applications in it.
I will say, you might be misguiding the LLM by filling it full of bad examples before starting. Kind of like the advice about not staring at a tree downslope while skiing, if you’re fixated on it you’re MORE likely to hit it.
Why not just give it shellcheck and have it run that on every script it creates?
Shellcheck, while good, doesn’t capture all best practices in my opinion. There are many items in that doc which shellcheck would happily allow, worst of all being set -euo pipefail.

Sounds like you were writing bad unit tests and AI showed you how to do it right.
If so, it was project-wide across hundreds of devs.
There was a time when nobody wrote unit tests, not so long ago, really.