AWS waste pattern

Review stopped EC2 instances before they quietly keep your AWS bill warm

Stopped EC2s look harmless, but storage and dependencies keep costs alive long after compute is paused.

Most tools stop at visibility. OpsCurb treats findings like work: identify the resource, assign the owner, and keep follow-through visible until it closes.

  • Common after incident and patch cycles
  • Storage and snapshots keep recurring cost
  • Often a high-velocity weekly queue item

Tiered AWS access

Start with the Core Scan Role, add optional capabilities later, and review the public permission mapping before you connect.

Priority context

Frame the issue in monthly and annual impact so the cleanup gets prioritized and tracked.

Owner-ready next step

Use evidence, guardrails, and handoff language instead of raw AWS screenshots alone.

What the issue is

Teams pause instances to save money, then leave them in the queue “for later” until later never comes.

The real drain is usually orphaned storage and ownership handoff, not the stopped node itself.

  • Temporary instance fleets parked for rollback
  • Dev stacks paused through release windows
  • Automation jobs that still point to old test environments

Validation steps

Before you terminate anything, confirm intent: rollback, audit retention, or true decommission.

Then check stop duration, attached volume usage, and the last deployment or incident reference.

  • Verify the instance is not part of active DR, compliance, or backup workflows
  • Check whether attached EBS, ENIs, and security groups are still meaningful
  • Add owner context in the finding before deciding next action
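The checks above can be sketched as a small filter over instance data. This is a minimal sketch, not an OpsCurb feature: the input mimics the shape of boto3's `describe_instances()` output, but the `StoppedAt` field and the 30-day threshold are assumptions (real EC2 responses only expose the stop time inside the `StateTransitionReason` string, which you would need to parse).

```python
from datetime import datetime, timedelta, timezone

# Assumed review threshold: flag instances stopped longer than this.
STALE_AFTER = timedelta(days=30)

def flag_stale_stopped(instances, now=None):
    """Return findings for instances stopped longer than STALE_AFTER.

    `instances` mirrors the dict shape of ec2.describe_instances();
    `StoppedAt` is an assumed pre-parsed field, not a real EC2 attribute.
    """
    now = now or datetime.now(timezone.utc)
    findings = []
    for inst in instances:
        if inst["State"]["Name"] != "stopped":
            continue
        age = now - inst["StoppedAt"]
        if age < STALE_AFTER:
            continue
        findings.append({
            "InstanceId": inst["InstanceId"],
            "StoppedDays": age.days,
            # Attached EBS volumes that keep billing after compute pauses.
            "Volumes": [m["Ebs"]["VolumeId"]
                        for m in inst.get("BlockDeviceMappings", [])],
        })
    return findings
```

The point of returning volume IDs alongside stop duration is that the storage, not the stopped node, is usually the recurring cost.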

Risk warnings

Terminating while dependencies are unclear can remove pre-baked recovery paths.

Even if terminated, snapshots and logs may still be needed for audit replay.

  • Preserve required images and snapshots before deletion actions
  • Coordinate with on-call if the instance was part of incident tooling
  • Use stop-first, then termination review as the safer sequence for ambiguous workloads
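The stop-first sequence above can be encoded as a small decision helper. This is a hedged sketch under assumed flag names, not an OpsCurb API; adapt the states and fields to your own finding schema.

```python
# Sketch of a stop-first review sequence for an ambiguous workload.
# All field names and returned action labels are illustrative assumptions.

def next_action(finding):
    """Recommend the next safe step for a stopped-instance finding."""
    if finding.get("in_dr_or_compliance"):
        return "keep"                  # active DR/compliance: do not touch
    if not finding.get("snapshots_preserved"):
        return "snapshot-first"        # preserve images/snapshots before deletion
    if finding.get("intent") in (None, "unclear"):
        return "confirm-owner-intent"  # coordinate with owner/on-call first
    if finding.get("intent") == "decommission":
        return "terminate-review"      # termination review, not immediate delete
    return "keep-stopped"              # rollback/audit retention: leave stopped
```

The ordering matters: dependency and snapshot checks gate everything else, so an unclear finding can never fall through to termination.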

ROI framing

Stopped instance cleanup is usually a steady monthly savings lever because teams can remove them in small batches with low risk.

The larger payoff is process: each resolved stopped instance reduces future "forgotten workbench" noise and rework.

  • Recurring savings are predictable across multiple environments
  • Batching across accounts turns cleanup into a routine operating rhythm
  • Good for teams with recurring short-lived test infrastructure
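Framing the monthly and annual impact is simple arithmetic: volume size times a per-GB rate. A minimal sketch follows; the $0.08/GB-month figure is an assumed placeholder (roughly gp3 in us-east-1) and should be replaced with your region's actual EBS pricing.

```python
# Rough savings estimate for storage attached to stopped instances.
# The per-GB rate is an assumed placeholder; use your region's EBS pricing.

GP3_USD_PER_GB_MONTH = 0.08  # assumption, not an official price

def monthly_savings(volume_sizes_gb, rate=GP3_USD_PER_GB_MONTH):
    """Monthly and annual cost of EBS volumes on stopped instances."""
    monthly = sum(volume_sizes_gb) * rate
    return {"monthly_usd": round(monthly, 2),
            "annual_usd": round(monthly * 12, 2)}
```

For example, three orphaned volumes of 100, 200, and 50 GB come to about $28/month, or $336/year, which is the kind of figure that gets a cleanup ticket prioritized.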

How OpsCurb helps keep it continuous

OpsCurb surfaces stale stop-state resources as part of recurring scans and keeps them in the same owner queue you already use for cleanup.

That makes it less likely they linger in the account, dormant and unreviewed for months.

  • Recurring discovery in first-scan refreshes
  • Owner-ready summaries instead of raw instance inventories
  • Consistent follow-through with status and notes

FAQ

Questions buyers ask before they act

These are the friction points teams usually need to clear before they turn a likely savings opportunity into a real cleanup task.

Can I keep a stopped EC2 for rollback and still remove it from this queue?

Yes, if rollback windows are explicit and documented. Put a reason and revisit date in the finding notes so review is intentional.

Should every stopped instance be deleted?

No. Some are valid backups for release or rollback. The right action is to validate intent, then choose to stop, archive, or release.

Can OpsCurb automatically clean this up?

No. OpsCurb provides evidence and prioritization; your team performs the final action.

Related next steps

Keep exploring this savings path

Move from research to action with a tutorial, a sample brief, a live review, or an ongoing plan.

See all plans