
Oblak Znanja

IT education and news

Stopping bugs before they ship: The shift to preventative security

By Tomšić Damjan

May 11, 2026




ZDNET’s key takeaways

  • Secure software needs to begin before coding.
  • Threat modeling helps teams catch risky assumptions early.
  • Dependency hygiene can prevent hidden supply chain risks.

Software has a lifecycle. From the spark of an idea through coding, testing, deployment, customer use, and eventual revision or retirement, each line, module, and component becomes more entrenched, more solidified as part of the overall solution, and therefore much harder to fix if problems arise later. Yet we often fix software only after problems surface in late-stage use. In this article, we’ll discuss proactive strategies to catch flaws before they reach production.

Two terms are key to this approach: secure-at-the-source and secure-by-design. Both terms refer to the process of building security and reliability into code at the earliest stage of the software lifecycle. We’ll focus on how security can be designed into all phases, from requirements and design through coding, dependency selection, build pipelines, deployment, and maintenance.


This approach requires a mindset shift across the lifecycle. Before, we might have asked, “How quickly can we find and fix what went wrong?” That’s still a valid question. But now we’re asking another question much earlier: “Where are risks entering our development process, and what can we change in our designs, tools, templates, dependencies, and reviews so fewer of them reach code in the first place?”

Prevention starts before code

Coding always starts with a vision of the desired result. That vision sparks a design stage, where designers and coders (sometimes the same person or people) work out how to approach the coding process. It’s here, before the first line of code is written, that vulnerabilities start to manifest.


That situation arises because design decisions impact implementation. While working through the design, consider these factors carefully:

  • Trust boundaries: Weakly defined boundaries between users, services, networks, or systems can mean that one compromised area affects parts of the application that should have been isolated.
  • Identity: If the system doesn’t reliably know who or what is making a request, every downstream security decision becomes questionable.
  • Authorization: If the architecture does not consistently enforce what each user or service is allowed to do, attackers may gain access to actions or data they should not have.
  • Data exposure: If sensitive data flows through too many systems, logs, APIs, or client-side components, it becomes easier to leak or misuse.
  • Logging: If logging is missing, excessive, or poorly designed, teams may either miss attacks or accidentally store sensitive information where it does not belong.
  • Failure modes: If the system fails while data is open, leaks details during errors, or behaves unpredictably under stress, outages and attacks can turn into security incidents.
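The authorization and failure-mode points above can be sketched in a few lines. This is a hypothetical illustration (the role names, permission table, and `is_allowed` function are invented for this example, not taken from any real system): access decisions go through one central check that fails closed, so an unknown role or a typo denies access instead of granting it.

```python
# Hypothetical sketch: a single, centralized authorization check that
# "fails closed." Roles, actions, and the permission table are examples only.

PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only when the role explicitly grants the action.

    An unknown role falls through to an empty permission set, so a bug
    elsewhere (say, a misspelled role name) denies access by default
    rather than quietly granting it.
    """
    return action in PERMISSIONS.get(role, set())
```

Routing every permission decision through one function like this also makes the architecture easier to review: there is exactly one place to audit, instead of ad-hoc checks scattered across the codebase.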

We’ve all heard the phrase, “What could possibly go wrong?” It’s usually said after some audacious and potentially unwise plan is proposed. 

But if you ask that same question with serious intent, you can start threat modeling your software. Other questions to ask before committing to a design include: Who will use this system? What data will it touch? What services will it trust? What nefarious behaviors could an attacker try? What would happen if one part failed or was compromised?
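Those questions become more useful when you ask them systematically, component by component, rather than once for the whole system. A minimal sketch (the component names and helper function here are illustrative, not a standard tool) might pair every design component with every question to produce a review checklist:

```python
# Illustrative sketch: turn the threat-modeling questions above into a
# per-component checklist. Components and questions are examples only.

QUESTIONS = [
    "Who will use this system?",
    "What data will it touch?",
    "What services will it trust?",
    "What nefarious behaviors could an attacker try?",
    "What would happen if one part failed or was compromised?",
]

def threat_checklist(components):
    """Pair every component with every question, so no part of the
    design skips the 'what could go wrong?' conversation."""
    return [(component, question)
            for component in components
            for question in QUESTIONS]

# Two components x five questions = ten items to discuss before coding.
checklist = threat_checklist(["login service", "payments API"])
```

Formal methodologies such as STRIDE structure this same exercise around categories of threat, but even a flat checklist like this forces the conversation to happen while the design is still cheap to change.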


Thinking through design decisions early, with threat and security issues top of mind, can help you catch risky assumptions early, while the design is still flexible. Then your team can make safer choices before those choices become expensive code, production dependencies, or customer-facing weaknesses.

Before you start coding, think about what “safe enough” means. Pre-planning security considerations means factoring authentication, authorization, encryption, auditability, data retention, abuse cases, and recovery behavior into your design from the beginning.


CISA (Cybersecurity and Infrastructure Security Agency) is America’s primary cyberdefense agency. CISA is promoting a Secure by Design strategy, in which vendors build cybersecurity into the design and manufacture of technology products.

According to CISA, “Products designed with Secure by Design principles prioritize the security of customers as a core business requirement, rather than merely treating it as a technical feature.”

If you’re interested in this approach (and you really should be), I recommend reading CISA’s detailed document on the strategy.

Prevention continues inside the developer workflow

I remember the day, decades back, when editors morphed into integrated development environments (IDEs) and became true helpers. The key feature was the symbolic debugger, which allowed you to trace code flow, inspect variables, and set breakpoints. IDEs instantly improved my code quality because I could monitor every variable continually and see what was changing, and when.

Since then, IDEs have improved continuously. At some point, developers added features to monitor your code as you write it, flagging errors as you type. For you non-programmers, this feature is like when the spellchecker in your word processor shows those squiggly lines under words, but for entire sections of code.


Despite the hype around vibe coding, humans will continue to write code. Maybe not all of it, and maybe not all coders, but there will still be experienced developers who create code line by line. For those developers, secure-at-the-source means that the IDE should be able to flag security issues as much as syntax issues, while the code is being written.

Other secure-at-the-source additions to the developer workflow include checks in pull requests before merging, dependency alerts in repositories, secrets detection before commits become incidents, automated tests in CI/CD pipelines, safer package guidance when choosing libraries, issue tracking that connects findings to real work, and deployment checks that prevent risky changes from reaching production unnoticed.
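One of those checks, secrets detection before commit, is simple enough to sketch. The snippet below is a hedged illustration, not a real scanner: the two regexes (one for the shape of AWS access key IDs, one for generic hard-coded `key = "value"` assignments) are examples, and production tools such as gitleaks or detect-secrets cover far more patterns and run automatically as pre-commit hooks.

```python
# Minimal sketch of a pre-commit secrets check. The patterns are
# illustrative; real scanners cover many more credential formats.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text):
    """Return the lines of text that look like hard-coded credentials."""
    hits = []
    for line in text.splitlines():
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

Wired into a pre-commit hook, a check like this turns a would-be incident (a credential pushed to a shared repository) into a one-second fix on the developer's machine.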

Just this year, Amazon (a firm that should clearly know better) pushed a code change that blocked customers from checking out, looking at products, and accessing their accounts. As much as some of us would prefer this to happen more often to keep us from sending Bezos all our bucks, the fact is that a mere deployment error cost Amazon millions. That pricey oopsie showcases the cost of not catching errors and vulnerabilities before you ship.

Somewhere in the deployment process, Amazon didn’t use preventative security measures. Its guardrails (assuming it had them) didn’t work.

As part of the development process, programmers and programming teams can help secure their output by starting with established secure coding patterns. Using approved frameworks, reusable authentication and authorization libraries, safe defaults, secure templates, and platform services provides a standardized basis for code where developers don’t have to choose implementation patterns for every module.
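Safe defaults are easiest to see in a concrete contrast. The sketch below uses Python's built-in sqlite3 module as a stand-in for whatever database layer a team has standardized on; the table and function names are invented for illustration. The unsafe version builds SQL by string formatting, while the safe default uses parameterized queries, so a template or approved framework that only exposes the second style removes the decision from every developer.

```python
# Contrast between ad-hoc query building and the safe default
# (parameterized queries). Table and names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # DON'T: string formatting lets crafted input rewrite the query.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # DO: placeholders keep user input as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

Given the classic injection input `alice' OR '1'='1`, the unsafe version returns rows it should not, while the parameterized version correctly returns nothing, which is exactly the kind of mistake that secure templates prevent developers from having to avoid by hand.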


The National Institute of Standards and Technology (NIST), a non-regulatory US federal agency within the Department of Commerce, has suggested a framework for “mitigating the risk of software vulnerabilities.” NIST SP 800-218 proposes software development lifecycle best practices that can reduce vulnerabilities. Some of these practices include:

  • Prepare the organization: Define roles, standards, training, and secure workflows.
  • Define security requirements: Make security expectations explicit before development.
  • Use secure defaults: Reduce risky choices that developers must make manually.
  • Secure development environments: Protect tools, repositories, pipelines, and credentials.
  • Review source code: Catch design and implementation weaknesses early.
  • Test executable code: Use dynamic testing, fuzzing, and runtime checks.
  • Protect software integrity: Verify artifacts, provenance, and release authenticity.
  • Analyze vulnerabilities: Understand root causes, not just individual bugs.

The NIST guidelines also recommend tracking, evaluating, and updating dependencies. We’ll talk about this in depth next.

Managing supply chain risk

Over the past few years, we’ve all become intimately familiar with what happens when a supply chain is interrupted. We all remember The Great Toilet Paper Shortage of 2020, for example. Supply chain is a term that describes how something, such as toilet paper, moves from raw materials to manufacturing, then to shipping, and finally to distribution and consumption.

Software development also has a supply chain, although our term of art is “dependencies.” Nobody writes all the code in a product or service. Instead, most software is assembled from building blocks written by other companies or open-source developers. Those building blocks are, themselves, often composed of other building blocks, modules that do almost everything that happens behind the scenes.


The problem is that these building blocks, in the form of open-source libraries, containers, APIs, build tools, SaaS components, and AI-generated code, can all introduce vulnerabilities and flaws in the final solution.

Sometimes, malicious actors will submit changes to open-source tools that core developers miss. Other times, simple coding mistakes can lead to vulnerabilities. The thing is, these dependencies are black boxes to most developers. Worse, they’re moving targets. As they get updated, those updates are included in production software. This step means a dependency that was once perfectly safe can be compromised in a later update.

Think about it this way. While your code might have vulnerabilities, unless it’s widely used, those vulnerabilities might take some time for threat actors to discover. But vulnerabilities in popular dependencies? Those are widely known, and exploits for them are often sold on illicit marketplaces. The easiest way for your software to become vulnerable is to rely on vulnerable software.


All of this interaction means that there needs to be a strong push for dependency hygiene. As part of your integration and approval process, make sure you choose verifiably maintained packages, lock in known versions, review transitive dependencies, monitor known vulnerabilities, and avoid libraries with weak maintenance, suspicious ownership changes, or poor security signals.
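One piece of that hygiene, locking in known versions, can be checked mechanically. The sketch below is a minimal illustration (the `unpinned` function and the sample requirements file are invented for this example): it flags any requirement that lacks an exact `==` pin, the kind of floating version that silently pulls in untested future releases. Real tooling such as pip-audit, Dependabot, or Renovate goes much further, also checking known vulnerabilities and maintenance signals.

```python
# Minimal dependency-hygiene check: flag requirements without an exact
# version pin. File format and package names are examples only.
def unpinned(requirements):
    """Return requirement lines that lack an exact '==' version pin."""
    flagged = []
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if "==" not in line:
            flagged.append(line)
    return flagged

reqs = """\
requests==2.32.3
flask>=2.0   # floating lower bound: picks up future releases untested
pyyaml       # no pin at all
"""
```

Run against the sample file, the check flags the `flask` and `pyyaml` lines, which is a reasonable gate to wire into a CI pipeline alongside a vulnerability scanner.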

If this means swapping out dependencies or choosing different suppliers, the benefits very much outweigh any supply chain switching costs.

Reducing reactive security

Responding to a security or software emergency sucks. You can feel your pulse rate skyrocket when, two sips into your first cup of coffee, an email or notification describes how everything has just blown up. It’s even worse when this issue happens in the middle of the night.

Designing and delivering software built to be secure can reduce those stress bumps. This approach can also reduce your organization’s overall liability, reduce bad press, and increase customer confidence.

Implementing a design change before release will undoubtedly be cheaper and less painful than production incidents, customer notifications, urgent hotfixes, or compensating-control workarounds.

This shift is a cultural change. Secure-at-the-source makes development quality a core practice in design and coding. Security has to be part of how software is written. Don’t wait until after everything is coded and built to find out what needs to be recoded and rebuilt. And definitely, if at all possible, don’t wait until you have angry customers screaming at you (ask me how I know) when something they rely upon breaks down horribly.

Your stomach acid (or lack thereof) will thank you.

Would your developers welcome security guardrails in their daily workflow or see them as another layer of friction? Let us know in the comments below.



Web source

By Tomšić Damjan

Hello, I’m Damjan Tomšić, founder and editor of the IT education blog Oblak Znanja. I’ll do my best to bring you educational articles, tips, and reviews on basic and advanced use of computers and the internet. Contact: Google+, Gmail.