Why Business Incentives Still Strip Software Engineers of Professional Agency
Process & People, Revisited
The global outage that rolled across multiple banks and airlines this spring offered another reminder that software is civil infrastructure. Boarding passes stopped scanning, hospitals diverted ambulances, and capital markets throttled to half-speed. In each failure report, the proximate cause was a few hundred lines of code, yet the structural cause was older and colder: the people who wrote or reviewed that code never held the legal authority to overrule deadlines or cost caps when safety indicators flashed red.
Since I first argued this point in 2024, the empirical record has only hardened. The relationship between capital and code remains adversarial because financial incentives reward speed, margin, and optionality, while software quality and safety generate liabilities that can be externalized. Nothing in that equation indicts the individual developer; it indicts the governance vacuum that surrounds them. My purpose here is to map that enclosure precisely, trace its origins, and sketch a path towards professional agency that aligns software engineering with the safety standards already applied to concrete, steel, water, and electricity.
Licensed Authority Versus Discretionary Goodwill
Civil, mechanical, and electrical engineers operate under statutes that impose a duty of care. They graduate from accredited programs, pass comprehensive examinations, and log apprenticeship years before earning a licence that carries personal liability. A stamped drawing fixes responsibility on a named engineer; if the design collapses, the engineer and the firm face criminal jeopardy and civil damages. This legal structure does not guarantee perfection, but it realigns corporate incentives by making safety violations too expensive to ignore.
Software engineers, by contrast, practise under a patchwork of corporate coding guidelines, voluntary standards, and incomplete sectoral rules. A developer who discovers that a payment API leaks credentials can be overruled by a product manager who fears a missed release window. Should the breach occur, victims sue the company, insurers absorb most of the payout, and developers move on to the next sprint. British Columbia broke that mould in 2021 by bringing software within its Professional Governance Act, yet outside that jurisdiction the engineer’s refusal right is still a matter of employer goodwill. Since 2021 the number of BC-licensed software engineers has tripled year over year, from 14 to 126, and reciprocity talks with Ontario and Washington are already on the EGBC docket: small numbers, but an unmistakably exponential trend.
How the Vacuum Was Engineered
The void is not incidental. At the height of the dot-com boom, major software employers lobbied to keep coding outside professional statutes and used no-poach agreements, H-1B dependency, and offshore outsourcing to fragment labour power. Google, Apple, Intel, and Adobe eventually settled antitrust claims over wage suppression, but the practice served its purpose: it kept software wages high enough to attract talent while low enough to preserve margin, and it prevented any credible push for licensure. By 2024, fewer than two percent of workers in technical and professional services belonged to unions, compared with nearly twenty-seven percent of civil engineers.
The argument that software is “different” rested on three claims. First, defects are patchable after release, unlike a cracked bridge girder. Second, most software failures impose inconvenience rather than bodily harm. Third, regulatory friction would stifle innovation. All three claims are now empirically weak. Boeing’s 737 MAX crashes, caused in part by an ill-governed control algorithm, proved that code can kill at scale. The MOVEit breach, which exposed data from universities to pension funds, showed that intangible harm becomes concrete when identities are stolen. Finally, the steepest declines in software productivity have come not from regulation but from technical debt incurred under the move-fast-break-things ethos.
The Cost-Externalisation Cycle
Because most jurisdictions treat software defects as contractual matters, software businesses can shift the cost of failure onto consumers and insurers. A health-care scheduling outage delays chemotherapy, yet liability is diffused across multiple service providers. Cyber-insurance, now a standard line item, converts this existential risk into predictable premium payments. Boards rationally conclude that speed to market yields higher net present value than stakeholder safety, particularly when venture financing rewards growth optics under a two-year time horizon. The developer becomes a fungible cost centre whose work must be accelerated, sliced, and eventually replaced by automation or cheaper talent pools.
Directors comfort themselves with D&O cover, Delaware’s business-judgment rule, and §102(b)(7) exculpation clauses. Yet Side-A policies increasingly exclude cyber-recklessness, and the SolarWinds (2024) and Blackbaud (2025) derivative settlements show carriers refusing indemnity where boards ignored documented security and safety gaps. The 2023 extension of Caremark oversight duties to officers pierces the armour further: a mis-scoped CISO job description or sprint plan is now a self-authenticating exhibit that directors possessed actual knowledge of governance failure. Fiduciary duty is not dead: it is deferred until discovery, when liability snaps back with compound interest.
Counter-Currents
Organised Labour: The Alphabet Workers Union (2021) launched in the heart of Big Tech; ZeniMax QA (2025) and the Overwatch Gamemakers Guild (2025) proved games are not an outlier; and in April 2025 a United Tech Workers local at a Fortune-500 SaaS firm ratified the first enterprise-software CBA covering release-quality metrics.
Stamped-Drawing Analogues: DO-178C, ISO 26262, and FDA software guidance impose traceability regimes indistinguishable from sealed-drawing practice.
The Trust-Value Discount: Marsh’s 2025 Cyber Catalyst bulletin lists a 22% premium reduction for firms that can show PE-style review on critical code paths; a recent mid-market M&A deal priced in a seven percent valuation haircut when the target lacked such evidence. Enterprises able to demonstrate sealed-drawing-level assurance on cloud security already command lower cyber-insurance deductibles and faster enterprise-sales cycles.
A Tiered Model For Professional Authority
Extending full professional licensure to every line of code is politically impossible and strategically unnecessary. Most user-interface tweaks do not warrant the same scrutiny as a flight-control loop. What matters is to define criticality tiers, tie each tier to clear assurance obligations, and allocate personal liability accordingly. The top tier, encompassing life-critical and systemic-risk software, should require a licensed Professional Software Engineer who can refuse unsafe directives without retaliation. The middle tiers can adopt a chartered-firm model in which audited process maturity substitutes for individual licensure. The bottom tier remains an arena for rapid experimentation, subject only to consumer-protection law. Think of it as the software analogue to building codes: skyscrapers need stamped drawings; DIY garden sheds do not, and yet Home Depot flourishes.
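The tier-to-obligation mapping described above can be sketched as a simple lookup. The tier names, example systems, and obligation wordings below are hypothetical illustrations of the model, not drawn from any existing statute:

```python
from enum import Enum

class Tier(Enum):
    """Hypothetical criticality tiers, highest risk first."""
    LIFE_CRITICAL = 1   # flight control, medical devices, grid software
    SYSTEMIC = 2        # payments, core banking, cloud identity
    COMMERCIAL = 3      # typical SaaS back ends and line-of-business apps
    EXPERIMENTAL = 4    # prototypes, UI tweaks, garden-shed projects

# Illustrative obligations per tier, mirroring the model in the text.
OBLIGATIONS = {
    Tier.LIFE_CRITICAL: "licensed PSE sign-off with protected refusal right",
    Tier.SYSTEMIC: "licensed PSE sign-off with protected refusal right",
    Tier.COMMERCIAL: "chartered-firm audit of process maturity",
    Tier.EXPERIMENTAL: "consumer-protection law only",
}

def required_assurance(tier: Tier) -> str:
    """Return the assurance obligation attached to a criticality tier."""
    return OBLIGATIONS[tier]

print(required_assurance(Tier.SYSTEMIC))
# licensed PSE sign-off with protected refusal right
```

The point of the sketch is the shape of the policy, not the code: obligations attach to the classification of the system, not to the individual line of code, which is what keeps the bottom tier free for experimentation.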
Executives will ask who pays for slower cycles and higher assurance. The empirical answer is the same one delivered by civil engineering a century ago: the public already pays, through failures that destroy value. IBM’s 2024 Cost of a Data Breach report shows median incident cost at US $4.5 million; Ponemon’s longitudinal study puts proactive security-assurance spending at roughly US $0.9-1.2 million for firms of the same size: a delta of roughly four to five to one. If licensure adds five percent to engineering budgets but cuts breach probability by 40-60 percent and breach severity by 70-90 percent, the net present cost of ownership improves. Insurers and credit-rating agencies can accelerate this calculus by applying differential premiums and bond spreads to firms that meet professional software-engineering standards. Trust becomes a measurable asset with a discounted-cash-flow profile.
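The back-of-the-envelope calculus above can be made explicit. The 30 percent annual breach probability below is a hypothetical assumption (the text cites only the cost figures and the reduction ranges); a minimal sketch taking the conservative end of each cited range:

```python
# Illustrative expected-cost comparison. The 30% annual breach
# probability is an assumption for the sake of the example; the other
# figures come from the ranges cited in the text.

def expected_annual_cost(breach_prob, breach_severity, assurance_spend=0.0):
    """Expected yearly breach loss plus any up-front assurance spend."""
    return breach_prob * breach_severity + assurance_spend

BASELINE_PROB = 0.30        # assumed chance of a material breach per year
BASELINE_SEVERITY = 4.5e6   # cited median incident cost (IBM 2024)
ASSURANCE_SPEND = 0.9e6     # low end of cited proactive-assurance spend

# Conservative ends of the cited ranges: -40% probability, -70% severity.
assured_prob = BASELINE_PROB * (1 - 0.40)
assured_severity = BASELINE_SEVERITY * (1 - 0.70)

baseline_loss = expected_annual_cost(BASELINE_PROB, BASELINE_SEVERITY)
assured_loss = expected_annual_cost(assured_prob, assured_severity,
                                    ASSURANCE_SPEND)

print(f"status quo:     ${baseline_loss:,.0f}/yr")   # $1,350,000/yr
print(f"with assurance: ${assured_loss:,.0f}/yr")    # $1,143,000/yr
```

Even at the conservative end of the cited ranges the expected annual cost falls; the comparison is, of course, sensitive to the assumed baseline breach probability, which is exactly the variable insurers are best placed to price.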
Aligning With Trust-Product Practice
Within the Trust Value Management framework, this alignment is straightforward. Licence-backed refusal rights convert safety into a manufacturable feature, while audit-ready evidence flows become trust artefacts that sales and investor-relations teams can present without embellishment. Business no longer treats engineering quality as a drag coefficient; it treats it as a forward-pricing mechanism that opens regulated markets and compresses sales cycles. Once quality and safety translate into revenue acceleration, the adversarial frame collapses.
Twenty-five years of agile orthodoxy taught the industry how to ship features at the speed of desire. It also taught capital that software risk is someone else’s problem. The record now shows that diffuse liability invites catastrophe and corrodes trust in digital life. The remedy is neither heroic individual resistance nor moral appeals to move slowly. It is structural: tiered professional governance, enforceable refusal rights, and financial instruments that reward demonstrable safety. Software engineers did not create their enclosure, and they cannot dismantle it alone. Legislators, insurers, CFOs, and, yes, labour organisers must cooperate to align incentives with the realities of code-as-infrastructure. When they do, the engineer’s seal will migrate from paper drawings to Git repositories, and the public will gain a digital environment worthy of its trust.