
I Locked Myself Out of AWS SSH with iptables -F: A Cautionary Tale and Recovery Steps

By: danduran | Category: Development

Overview

In my decade of managing AWS EC2 instances, I still found a way to make a classic "newbie" mistake. In a rush to get things done, I ran the iptables -F command and then, worse still, saved after creating new rules, without fully considering the consequences. This post walks through how I regained access and shares some thoughts on the critical nature of careful firewall management, even for seasoned pros.

The Misstep

One day, while I was managing network traffic rules via iptables, I decided to flush all existing rules to start configuring anew. Here's the command that started it all:

#!/bin/bash
iptables -F
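
In hindsight, a safer habit is to reset the default chain policies to ACCEPT before flushing, so that an empty ruleset can't drop your live SSH session. A minimal sketch of what I should have run instead:

#!/bin/bash
# Open the default policies first so an empty ruleset
# cannot drop the active SSH connection.
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# Now flushing is safe: with everything accepted by default,
# removing all rules leaves the box open rather than locked.
iptables -F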

Normally, if you don't save these changes, they only live in memory: rebooting the server restores the previous rules from disk and gives you back SSH access. In my excitement, however, I not only flushed the rules but went on to create new ones and saved them immediately. That persisted the changes across reboots and locked me out of SSH entirely.
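
Had I snapshotted the live ruleset before experimenting, recovery would have been a one-liner. iptables-save and iptables-restore ship alongside iptables on most distributions; the backup path below is just an example:

# Snapshot the current rules before changing anything.
iptables-save > /root/iptables-backup.rules

# ...flush, rewrite, test...

# If the new rules misbehave, roll back to the snapshot.
iptables-restore < /root/iptables-backup.rules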

Solution: How to Regain Access

Here's a step-by-step guide I followed to recover from this oversight:

Step 1: Stop Your Instance

First, I stopped the EC2 instance from the AWS dashboard. This was necessary to modify the instance settings safely.
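
If you prefer the AWS CLI over the dashboard, stopping the instance looks roughly like this; the instance ID below is a placeholder:

# Stop the instance (replace the ID with your own).
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Wait until it is fully stopped before editing user data.
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0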

Step 2: Modify User Data

I then navigated to Instance Settings -> View/Change User Data and injected a script that would modify my iptables upon the next boot. Here's what I pasted:

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
csf -x
iptables -L
iptables -F
--//

This script disables CSF (as I was using CSF for my firewall), lists the current iptables rules so they show up in the boot log, and flushes them again. The cloud-config part matters too: by default, user-data scripts run only on an instance's first boot, and the [scripts-user, always] line tells cloud-init to run them on every boot. If you use UFW instead of CSF, adjust the shell-script part accordingly; see the sketch below.
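
For example, on a UFW-based box the shell-script part of the user data might look roughly like this (an untested sketch; the cloud-config part stays the same):

#!/bin/bash
# Disable UFW and flush whatever rules it left behind.
ufw disable
iptables -F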

Step 3: Start Your Instance

After updating the user data, I started my instance again. The script ran at boot, adjusted my iptables, and thankfully restored my SSH access.
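
You can confirm the rescue script actually ran by checking cloud-init's output log once you're back in. Roughly:

# From your workstation: start the instance again (placeholder ID).
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Then, on the instance, once SSH works again:
sudo tail -n 50 /var/log/cloud-init-output.log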

Step 4: Verify and Adjust

Once I regained access, I immediately verified the iptables rules and made necessary adjustments to ensure the server was both accessible and secure.
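
Concretely, the check looked something like the commands below. Persisting with iptables-save assumes a plain iptables setup; CSF and UFW manage their own persistence, so skip that last step if you use either:

# List active rules with packet counters.
iptables -L -n -v

# Make sure SSH is explicitly allowed before anything stricter.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Persist only once the ruleset is verified (this path is the
# Debian/Ubuntu convention used by iptables-persistent).
iptables-save > /etc/iptables/rules.v4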

Conclusion

This experience was a powerful reminder of the importance of careful rule management with iptables. I learned to always back up my current rules before making significant changes, and to consider all implications of saving new rules, especially on a remote server where physical access isn't an option.
