The Uncomfortable Parts
The specific costs of AI-assisted development that I benefit from and haven't resolved: displacement, dependency, training data, energy, and code quality.
In my first post on this site I said there were several uncomfortable aspects of AI-assisted development that each deserved more than a paragraph. I named them (economic displacement, concentration of power, environmental cost, code quality uncertainty) and then moved on, which is the kind of thing you do when you want credit for honesty without actually doing the hard part. So here's the hard part.
I'm going to work through each of them with as much specificity as I can, because vague discomfort is easy but specific discomfort is useful. I'm also not going to resolve any of them neatly since I don't think they resolve neatly.
I'm doing the work of a small team, and that's not free
The most immediate uncomfortable truth is economic. I'm a solo developer building multiple regulated products (medical billing, media audit compliance, news intelligence) and I'm doing it with AI assistance that genuinely multiplies what one person can produce. Solo-founded startups have risen from 23.7% to 36.3% of new companies between 2019 and mid-2025 [1]. Sam Altman talked about the "one-person unicorn" in September 2023 [2], and that framing sounds aspirational until you think about what it actually means: the same (or more) output, fewer people getting paid.
The numbers are already showing up. Duolingo cut 10% of its contract workforce in January 2024 citing AI [3], then went fully "AI-first" in 2025 [4]. The tech sector saw over 150,000 layoffs in 2024 [5]. A Stanford Digital Economy Lab study found a 16% decline in early-career employment (ages 22-25) in AI-exposed occupations since late 2022, whilst workers over 30 in the same roles saw employment grow [6]. Stack Overflow's 2024 survey found 76% of developers are now using or planning to use AI tools [7], which means the baseline expectation of what one developer produces has shifted, and the people most affected are the ones already in the most precarious positions: contractors, juniors, freelancers.
I benefit from this directly. I can build things that would otherwise require a team I can't afford, which means I get to exist as a company at all. But the people who would have been on that team are competing for fewer positions. "I'm creating products that wouldn't otherwise exist" is true but feels like rationalisation when you say it too confidently, so I'm saying it with the caveat that I know what it sounds like. I'm also fortunate to be old enough to have experience to fall back on, which means I'm not stuck competing for the junior roles that AI genuinely can replace. How are youngsters coming out of education supposed to get the experience they need to get past the junior stage to the point where they're 'employable'?
The Stanford data makes this concrete: it's not that the entire developer job market is collapsing, it's that it's collapsing for people under 25 whilst growing for people over 30 [6]. The ladder is being pulled up, and the people pulling it up are the ones who already climbed it. I'm one of those people. That's not a comfortable thing to write, but it would be dishonest not to.
I'm dependent on companies I can't influence
I build on Anthropic's API. My entire development workflow runs through Claude Code. If Anthropic changed their pricing tomorrow, or deprecated the model I've built my tooling around, or decided my use case violated some new policy, I'd be scrambling. This isn't theoretical, since OpenAI retired 33 models on a single day in January 2024 [8], and developers who'd built production systems on those models had to migrate or break. Anthropic retired the entire Claude 3.x family between October 2025 and January 2026, a deprecation window of just a few months [9]. API pricing across the major providers dropped 60-80% in a single year [10], which sounds like good news until you realise it also means the economics of your product can shift dramatically based on decisions made in a boardroom you're not in.
The market concentration is stark. Three companies (OpenAI, Anthropic, Google) control the infrastructure that an increasing number of software products depend on. Menlo Ventures puts Anthropic at 40% of enterprise LLM spend [11], which is the segment I care about, and 94% of IT leaders report vendor lock-in as a material concern [12]. I chose Anthropic because I think their approach to safety is more serious than the alternatives, but "I picked the best option among three" isn't the same as "I'm comfortable with the structure." I'm building regulated software on top of a venture-backed company whose incentives will eventually diverge from mine in ways I can't predict.
I've spent enough time in governance to know what single-supplier risk looks like, and this is it. The fact that I've chosen this dependency deliberately, with full awareness of the risk, doesn't actually reduce the risk. It just means I can't pretend I didn't see it coming.
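The textbook mitigation is to keep the provider decision in one place behind an abstraction, so a forced migration becomes a code change rather than a rewrite. Here's a minimal sketch of the shape I mean, in Python; the class names, the placeholder model ID, and the fallback logic are illustrative, not my production tooling.

```python
from dataclasses import dataclass
from typing import Protocol


class LLMProvider(Protocol):
    """Anything that can turn a prompt into a completion."""

    def complete(self, prompt: str, max_tokens: int) -> str: ...


@dataclass
class AnthropicProvider:
    # Pin the model ID so a deprecation fails loudly instead of
    # silently routing to whatever the provider substitutes.
    model: str = "claude-example-model"  # illustrative placeholder, not a real ID

    def complete(self, prompt: str, max_tokens: int) -> str:
        # The real Anthropic SDK call would live here, and nowhere else.
        raise NotImplementedError


@dataclass
class FallbackChain:
    """Try providers in order; surface the last error if all fail."""

    providers: list[LLMProvider]

    def complete(self, prompt: str, max_tokens: int) -> str:
        last_error: Exception | None = None
        for provider in self.providers:
            try:
                return provider.complete(prompt, max_tokens)
            except Exception as exc:  # in practice, catch provider-specific errors
                last_error = exc
        raise RuntimeError("all providers failed") from last_error
```

None of this removes the dependency. It just converts "scrambling" into "migrating", which is the difference between a bad week and a bad quarter.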
The training data question hasn't been answered
I wrote about this in my first post, but the specifics have moved since then. There are now 167 active AI-related lawsuits tracked across 55 defendants, with over $155 billion in disclosed stakes [13]. The New York Times lawsuit against OpenAI (filed December 2023) is in the summary judgment phase, with a judge ordering OpenAI to produce 20 million ChatGPT logs [14]. The UK High Court's ruling in Getty Images v. Stability AI (November 2025) largely rejected Getty's copyright claims, holding that AI model weights are not "infringing copies," though it found limited trademark infringement [15]. Anthropic settled the Bartz class action for $1.5 billion (the largest copyright settlement in US history, covering roughly 500,000 books downloaded from shadow libraries), with final approval scheduled for this week [16]. None of these cases have produced a definitive legal framework for code, which means the code I'm generating with AI assistance exists in a legal grey area that might not stay grey forever.
Some companies have started making gestures. Shutterstock began paying contributors from AI licensing deals and expanded the programme in 2026 [17]. BigCode's The Stack v2 uses permissive-licensed code with an opt-out tool [18], which is a step in the right direction even if "you're included unless you actively remove yourself" isn't the same as consent. But the core question (whether the people whose work trained these models were treated fairly) remains unresolved, and "unresolved" is doing a lot of heavy lifting in that sentence because what it actually means is "no, and we're hoping the lawsuits sort it out."
I use these tools knowing this. I've decided the practical value outweighs my discomfort, which is an honest statement about my priorities and not a defence of the status quo. The people who say this is creative theft at scale are making a point I can't dismiss, even if I've chosen to act differently from how that point might suggest I should.
There's a particular irony here for me. I spent years in governance and audit telling organisations that "we didn't know" is never an acceptable defence when the information was available. The information is available. I know the training data questions are unresolved, and I'm proceeding anyway, which makes me exactly the kind of risk-acceptor I used to write findings about.
The energy cost is real and growing
The International Energy Agency projects global data centre electricity consumption will hit 1,100 terawatt-hours in 2026, comparable to Japan's entire national consumption [19]. Google's 2024 environmental report showed data centre electricity consumption up 27% year over year, with water usage rising to 6.1 billion gallons [20]. Training GPT-4 consumed an estimated 50 gigawatt-hours of electricity [21], but that's almost beside the point since 80-90% of AI computing power now goes to inference, not training [22]. Every prompt, every response, adds up continuously.
Every time I send a prompt to Claude, there's an energy cost. It's small per interaction (Google published data showing a median Gemini text prompt uses 0.24 Wh, roughly equivalent to watching television for nine seconds [23]), but I send hundreds of prompts a day across multiple projects, and I'm one of millions of developers doing the same thing. The per-query triviality is genuine, and the aggregate is also genuine, and both of those things are true simultaneously, which is exactly the kind of problem that resists individual action.
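To put numbers on "both are true simultaneously", here's the back-of-envelope arithmetic, using Google's published 0.24 Wh median [23]; my prompt volume and the number of heavy users are assumptions, and I've marked them as such.

```python
# Back-of-envelope aggregate, using Google's published median for a
# Gemini text prompt (0.24 Wh) [23]. Prompt volume and user count
# are assumptions, chosen for order-of-magnitude only.
WH_PER_PROMPT = 0.24
PROMPTS_PER_DAY = 300       # assumption: a heavy day for one developer
DAYS_PER_YEAR = 365
HEAVY_USERS = 1_000_000     # assumption: heavy users worldwide

per_dev_kwh_year = WH_PER_PROMPT * PROMPTS_PER_DAY * DAYS_PER_YEAR / 1_000
aggregate_gwh_year = per_dev_kwh_year * HEAVY_USERS / 1_000_000

print(f"One heavy user:  ~{per_dev_kwh_year:.0f} kWh/year")    # ~26 kWh
print(f"A million users: ~{aggregate_gwh_year:.0f} GWh/year")  # ~26 GWh
```

Roughly 26 kWh a year per heavy user is trivially small; a million such users comes to roughly half a GPT-4 training run [21], every year, from text prompts alone. Same numbers, opposite conclusions, depending on which end you look from.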
The efficiency counter-argument is real (Google reduced median energy per Gemini prompt by 33x between May 2024 and May 2025 [23], and smaller models keep getting better), but efficiency improvements are running a race against adoption, and adoption is winning. I don't have a meaningful way to offset this beyond choosing efficient models when I can and being honest that it's not enough. That's an unsatisfying answer, and I'm including it precisely because it's unsatisfying. The comfortable move would be to leave this section out entirely.
AI-generated code might be making software worse
This is the one that keeps me up at night as someone who builds regulated software. A Stanford and NYU study tested GitHub Copilot across 89 code-generation scenarios and found that 40% of the output contained security vulnerabilities mapped to known CWEs (SQL injection, XSS, buffer overflows, hardcoded credentials) [24]. Veracode's 2025 report, testing over 100 LLMs, found that 45% of AI-generated code samples introduced OWASP Top 10 vulnerabilities, with Java worst at 72% failure rate [25]. Georgia Tech's Vibe Security Radar tracked 35 CVEs in March 2026 alone that were directly attributable to AI coding tools (up from 6 in January), and the researchers estimate the true count is five to ten times higher [26].
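If you haven't seen these findings up close, the most common failure is depressingly mundane. The following is an illustrative reconstruction of the pattern the studies flag, not code taken from them:

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # CWE-89 (SQL injection): user input interpolated straight into
    # the query string. This is the shape of suggestion the studies flag.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver handles escaping, so an input
    # like "'; DROP TABLE users; --" stays an ordinary string.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The difference is one line, which is exactly why it's so easy to accept the wrong version at speed.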
Then there's the perception gap. A peer-reviewed METR study gave 16 experienced open-source developers real tasks with and without AI tools, and found they were actually 19% slower with AI assistance. The developers themselves estimated they were 20% faster, a 39 percentage point gap between how productive they felt and how productive they were [27]. GitHub's own data shows that Copilot suggestions are accepted about 30% of the time, and developers keep 88% of what they accept [28]. But the independent security research paints a different picture from GitHub's quality claims, and when the company running the study is also the one selling the product, that conflict of interest is worth noting.
I've built extensive governance infrastructure specifically to catch these problems (CI enforcement, testing gates, diagnostic hooks, the lot) and I've written about why. But most developers haven't, and the industry-wide effect of millions of developers accepting AI-generated code without the governance infrastructure to verify it is genuinely concerning. My governance is load-bearing, which means the default (no governance) is a problem, and the fact that I've solved it for myself doesn't solve it for anyone else. Fortune 50 enterprise data shows AI-assisted developers produce commits at three to four times the rate but introduce security findings at ten times the rate [29]. That's not a productivity gain, it's a vulnerability multiplier with a speed boost on top.
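To be concrete about what "governance infrastructure" means at its simplest, here's the shape of one gate: fail the pipeline the moment a static security scanner finds anything. This is a sketch; the tool (bandit, which is Python-specific) and the paths are illustrative, and my actual gates are project-specific.

```python
import subprocess
import sys

# One gate among several: block the merge if the static security
# scanner reports anything at medium severity or above. Bandit exits
# non-zero when it finds issues, which is what makes this work in CI.
result = subprocess.run(
    ["bandit", "-r", "src/", "--severity-level", "medium"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print(result.stdout)
    sys.exit("Security gate failed: fix the findings before merging.")
print("Security gate passed.")
```

The gate itself is trivial. The discipline is that nothing merges without it, and that discipline is precisely what most AI-assisted workflows skip.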
Sitting with it
I don't have a unifying conclusion that ties these threads together into something comfortable. The displacement is real and I benefit from it. The dependency is real and I've chosen it anyway. The training data ethics are unresolved and I'm proceeding regardless. The energy cost is growing and I can't meaningfully offset it. The code quality risk is documented and I've built controls for it, but those controls are my advantage, not the industry's baseline.
The honest position is that I use these tools because the practical value is enormous and because the alternative (not using them) doesn't actually help the people who are being displaced or whose work was used without consent. It just means I build less. Whether that's pragmatism or rationalisation depends on where you stand, and I think reasonable people can disagree about it.
What I keep coming back to is that these aren't problems I can solve by being individually virtuous. I can build governance around AI-generated code (and I have), but that doesn't fix the industry-wide vulnerability problem. I can acknowledge the training data ethics (and I do), but acknowledgement doesn't compensate anyone. I can be aware of the energy cost (and I am), but awareness doesn't reduce it. The structural problems require structural responses, from regulators, from the AI companies, from the industry, and the honest thing to say is that most of those responses haven't arrived yet.
In the meantime I'm trying to be specific about what I'm disagreeing with myself about, because vague discomfort is too easy to live with indefinitely, and specificity is at least the beginning of accountability, even if it's not the end of it.
---
Sources
1. NxCode — One-Person Unicorn: Context Engineering Solo Founder Guide 2026 — solo-founded startup share data (23.7% to 36.3%).
2. Fortune — Sam Altman wants AI to create a one-person unicorn — Altman quote from a September 2023 conversation with Alexis Ohanian.
3. TechCrunch — Duolingo cuts 10% of its contractor workforce — January 2024 contractor cuts.
4. Entrepreneur — Duolingo Says AI Completed Work in 12 Months that Took Humans 12 Years — 2025 "AI-first" announcement.
5. Layoffs.fyi — 152,922 tech employees laid off from 551 companies in 2024.
6. Stanford Digital Economy Lab — Canaries in the Coal Mine — peer-reviewed study using ADP payroll data; 16% decline in early-career employment in AI-exposed occupations, growth for workers 30+. Also reported by TIME, Fortune, and CNBC.
7. Stack Overflow — 2024 Developer Survey: AI — 76% using or planning to use AI tools; 62% currently using.
8. OpenAI — Deprecations — 33 models retired on 4 January 2024, including the GPT-3 family.
9. LemonData — AI API Market 2026 Trends — Claude 3.x retirement timeline (October 2025 to January 2026).
10. TokenMix — AI API Pricing War 2026 — 60-80% price drops across major providers in a single year.
11. Visual Capitalist — Ranked: AI Models U.S. Businesses Pay For — Menlo Ventures data on enterprise LLM spend shares (Anthropic 40%, OpenAI 27%, Google 21%).
12. Amundson Strategic — What Your Board Doesn't Know About AI Vendor Lock-In — 94% of IT leaders cite vendor lock-in as a material concern.
13. AI Lawsuit Tracker — 167 active lawsuits, 55 defendants, $155B+ in disclosed stakes.
14. National Law Review — OpenAI Loses Privacy Gambit: 20 Million ChatGPT Logs — Judge Stein's order compelling log production. Also NPR — Judge allows NYT copyright case to go forward.
15. Latham & Watkins — Getty Images v. Stability AI: English High Court Rejects Secondary Copyright Claim — November 2025 ruling; model weights not "infringing copies". Also Mayer Brown analysis, Bird & Bird summary.
16. Authors Guild — What Authors Need to Know About the Anthropic Settlement — $1.5B settlement, ~500,000 books, final approval hearing 14 May 2026. Also NPR, Fortune, Wolters Kluwer legal analysis.
17. Shutterstock Investor Relations — Shutterstock Expands Partnership with OpenAI — six-year licensing deal; Contributor Fund pays artists based on training dataset frequency.
18. Hugging Face — The Stack v2 — 67.5TB dataset, 600+ languages, permissive-licensed code, "Am I in the Stack" opt-out tool.
19. IEA — Data centre electricity use surged in 2025 — 1,100 TWh projection for 2026, 18% upward revision from December 2025 estimates. Also Brookings.
20. Anadolu Agency — Google data centers used nearly 6B gallons of water in 2024 — 6.1 billion gallons of water, 27% electricity increase year over year. Also Data Centre Magazine, Google 2024 Environmental Report.
21. MIT Technology Review — We did the math on AI's energy footprint — GPT-4 training: ~50 GWh, 25,000 A100 GPUs, 100 days.
22. All About AI — AI Environment Statistics — 80-90% of AI computing power used for inference, not training.
23. Hannah Ritchie — AI footprint (August 2025) — median Gemini text prompt: 0.24 Wh, 0.03g CO2, 0.26ml water; 33x energy reduction from May 2024 to May 2025. Also Euronews.
24. Stanford EE — Dan Boneh and team find relying on AI is more likely to make your code buggier — Stanford/NYU study: 89 code-generation scenarios, 40% contained CWE-mapped vulnerabilities. Full paper on arXiv.
25. Veracode — GenAI Code Security Report — 100+ LLMs tested; 45% introduced OWASP Top 10 vulnerabilities; Java worst at a 72% failure rate; 86% failed XSS defence.
26. Cloud Security Alliance — AI-Generated Code Vulnerability Surge 2026 — Georgia Tech Vibe Security Radar: 35 CVEs in March 2026 attributable to AI coding tools; estimated 5-10x undercount.
27. METR — Early 2025 AI Experienced OS Dev Study — peer-reviewed RCT: 16 developers, 246 tasks, 19% slower with AI, perceived 20% faster. Full paper on arXiv.
28. GitHub Blog — Does GitHub Copilot improve code quality? — ~30% acceptance rate; 88% retention of accepted code. Also IT Pro — GitHub: 30% of Copilot coding suggestions are accepted.
29. SQ Magazine — AI Coding Security Vulnerability Statistics — Fortune 50 enterprise data: 3-4x commit rate, 10x security finding rate with AI-assisted development.