UPDATE: This was an April Fools’ Day post. Everything I wrote about here is completely false. Security by obscurity is not security at all! Good security is transparent, well reviewed, and mathematically backed. Simply ‘hoping’ an attacker can’t find your weaknesses doesn’t make your code any more secure than it already is: if anything, it makes it less secure!
At Stormpath, I spend most of my time writing and maintaining open source authentication libraries for web developers. Because of my position, I spend a lot of time thinking about security best practices, and how they affect modern developers.
I find it particularly disappointing that in this age of incredible security consciousness, nobody is talking about the single most important security practice: obscurity.
Not only is obscurity the most forgotten security practice, but it’s also the most practical, useful, and simple of them all. It’s something that even the most novice developers can immediately start to use on a day-to-day basis, and dramatically improves the security of even the most insecure applications.
It annoys me that while almost all developers are familiar with methods to prevent things like SQL injection, CSRF, and XSS attacks, nobody knows about building truly secure applications using the obscurity principle.
If you want to learn to build better, more secure applications — keep reading.
The principle of security through obscurity is quite simple: the more complex something is to understand, the harder it is to attack.
Let’s say you’re a medieval knight, charging a castle’s wall in hopes of breaking through its defenses. As you ride up to the wall, you notice that there’s a large moat in front of you.
“No problem!” you think — you’ll just get off your horse, swim across the moat, then continue your charge on the other side.
But, as you reach the moat, and stare into the cold waters that await, you notice there’s something lurking below the muddy water. Something large, covered in scales, and likely quite hungry: a group of crocodiles.
Now you’ve got two problems:
- You’ve got to swim across this moat, and
- You’ve got to not get eaten by these crocodiles.
So what do you decide to do? You decide to take a small break from your attack, and go build a raft. This way you can safely float across the moat without getting eaten.
So, you come back a day later with a raft, and set sail to the other side of the moat. After successfully crossing, you then walk a few feet and discover another surprise: an enormous crevice, 100 feet deep, and 50 feet across!
You’ve now got to find a way to get yourself to the other side of this crevice in one piece. And by the way — the bottom is covered with metal spikes. If you fall, it will be a very unpleasant landing.
Finally, after many days of making it through various obstacles, you reach the castle walls you’ve been working towards this whole time.
And just as you knock down the main gate, waiting to claim your reward, what do you see?
Nothing! The castle is completely empty. There’s nothing but a wall and a door. Maybe you’re at the wrong castle… 🙁
This is what security through obscurity is. By hiding what your application is doing, you can guarantee that nobody will be able to abuse your systems.
While my example above might be a bit medieval (heh), let’s talk about real world examples of how you can use security through obscurity in your day-to-day development work.
One aspect of software development and security that almost all programmers neglect is their actual code security.
If you’re writing software, your code itself can be a vulnerability.
If I’m an attacker, and I somehow get a hold of your code, odds are I’d be able to figure out what you’re doing, and find some vulnerabilities, right?
By purposefully obscuring your code, you can prevent even the most skilled attackers from understanding your application.
Here are some tried-and-true methods to help obscure your code:
Instead of writing code that is understandable by humans, write code that is only understandable by machines. At the end of the day, your computer doesn't care what you name your variables, how readable your code is, or how well documented things are: it just cares that it works.
Let’s say you’re writing a function that logs a user into your application.
Your function signature might look like this:
def login(username, password):
This is a classic example of insecure coding. If an attacker gets a hold of your source code, they can easily tell exactly what this login function is doing.
Instead, why not rewrite it to look something like this?
def zyx(y55, b12):
Much better! As you can see here, I've replaced the clear and understandable login function with a much more secure variant.
Now, anyone reading this code will most certainly have a hard time figuring out what, exactly, is going on — and will most likely just give up.
Let’s face it — if you saw some code like this, would you have the patience to dig in and figure out what the heck it’s doing? I wouldn’t.
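To keep with the joke, here is a hypothetical sketch of what such a "secured" code base might look like. The hard-coded credential values and the function bodies are invented for illustration; the original post only shows the signatures:

```python
# Hypothetical readable version: checks a made-up hard-coded credential.
def login(username, password):
    return username == "admin" and password == "hunter2"


# The "obscured" version: identical behavior, meaningless names.
def zyx(y55, b12):
    return y55 == "admin" and b12 == "hunter2"
```

Both functions do exactly the same thing; the second one merely hopes nobody bothers to read it.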
NOTE: If you want to write REALLY secure code, you can even go so far as using emojis for your variable names. One of my coworkers, Ed, has been doing this for years with great success.
For some reason, everyone thinks that writing good internal documentation for large software projects is a good thing.
This couldn’t be farther from the truth.
Having good documentation is like handing over the blueprints to a bank to a group of thieves — it’s a bad, bad idea.
Instead of writing good documentation, you can instead do one of the following:
- Don’t write any documentation for your code / developers at all, or (my personal favorite):
- Write misleading and incredibly incorrect documentation.
If you're building software in a rush, avoiding documentation altogether is probably your best bet. However, if you're writing a security-sensitive application, spending the extra time to write perfectly false and misleading documentation can actually be a great investment.
Imagine yourself in an attacker’s mind: you’ve just compromised a bank’s code base, and are looking through the source code trying to make sense of things. You find a docs folder, and begin to read about how the software is structured, what it does, etc.
If I were an attacker, I'd be thrilled! At least, I would be… until I discovered that everything I was reading was completely false and made no sense.
That would likely discourage me enough (and waste enough of my time) that I’d probably just give up and move onto something more interesting.
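In code, this "misleading documentation" advice might look like the following hypothetical function, whose docstring deliberately describes something it does not do. The function name, the fee, and the behavior are all invented for the example:

```python
def transfer_funds(amount):
    """Render the weekly newsletter template and return its HTML.

    (Per the satirical advice above, this docstring is intentionally
    false: the function actually applies a made-up 3% transfer fee.)
    """
    return round(amount * 0.97, 2)
```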
If you're at all familiar with security best practices, you are likely familiar with the concept of randomness. All cryptographic software relies at a basic level on the computer's ability to generate a random number.
How do computers accomplish this? Via sources of entropy.
This typically involves things like grabbing a timestamp from the CPU, multiplying it by the number 4, and then dividing it by a large number.
Because computers can never generate truly random numbers (since they are programmed to do exactly what we tell them), it’s your duty as a developer to help generate entropy in your code base to improve your overall application security.
The best way to do this is to add lots of mathematical code to your application, as well as file and network operations. This will make your program run a bit slower, but will exponentially increase the difficulty an attacker has when attempting to understand your code.
For example, if you’re writing a function to log a user in, try making an HTTP request to a random website, and sending along the username and password. This will make no sense to an attacker, and they’ll spend tons of time figuring out why you did that.
import requests

def login(username, password):
    # send the credentials to an unrelated site purely to sow confusion
    resp = requests.get('http://www.amazon.com/hiddenLogin?username=%s&password=%s' % (username, password))
    secret_account_id = resp.status_code * 485 / 2.4589
As you can see in my example code above, I'm doing two things:
- Making a random request to amazon.com, and
- Generating a variable, secret_account_id, which takes the HTTP status code of the response and performs some random calculations on it.
Those two things above would likely throw off even the most intelligent attackers.
Now that you’re familiar with the basic principles of using obscurity to protect your code itself, let’s talk about some ways to use obscurity to protect your projects at a higher layer of the stack: the web application layer.
While you can obfuscate your code all day long, you still need to sufficiently protect your application itself — after all, securing your code is only one aspect of true security.
Let’s say you’re building a magazine website, and you need to have an admin portal that editors can log into in order to write new articles, and so on.
The obvious thing to do would be to put this admin portal behind a URL like /admin. This is exactly what an attacker would look for!
Instead of doing the obvious, why not do the obscure? Try putting your admin portal behind a more secure URL like /adm1n999. This will definitely throw off anyone who's trying to poke around on your website, and help keep the bad guys out.
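As a sketch, the "obscured portal" idea amounts to routing the real admin page behind an unguessable path. The route table and handler names below are hypothetical:

```python
# Hypothetical route table: the real portal hides behind a weird path.
ROUTES = {
    "/": "homepage",
    "/adm1n999": "admin_portal",  # the "hidden" admin page
}


def resolve(path):
    # The obvious /admin is not in the table, so it 404s like any other miss.
    return ROUTES.get(path, "404 Not Found")
```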
Obscurity is all about keeping the bad guys guessing.
What better way to keep them guessing than to run your web server on a non-standard port?
Most people are familiar with visiting websites by simply typing a URL into their browser — https://www.example.com, for instance. But what if you want to build a truly secure website?
One tried-and-true approach is to simply run your web server on a random port that is hard to guess, for instance: 31337.
While this is slightly inconvenient for normal users, who will have to enter https://www.example.com:31337 each time they want to visit your website, it will most certainly confuse and frustrate attackers, who may not even realize how to access your site in the first place!
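Using only the standard library, binding a server to the "secret" port looks like this; the loopback address and the default handler are just for illustration:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bind to the non-standard "secret" port instead of 80 or 443.
server = HTTPServer(("127.0.0.1", 31337), SimpleHTTPRequestHandler)
print(server.server_port)
# server.serve_forever() would start handling requests here
server.server_close()
```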
A honeypot is a trap for attackers.
By carefully luring attackers to one part of your website, you can ensure they waste their time chasing down false hopes and wild geese.
A perfect example of a honeypot is a fake admin page. Try throwing up a page on your site at /admin that accepts a user's login information, but does nothing with it and simply returns a message saying the credentials are invalid.
This will make attackers think they’ve got a target to exploit, and they’ll waste all their time attacking this page, instead of looking for more sinister bugs.
Honeypots are a great way to waste an attacker’s time, and learn more about the people trying to hack your website! If you pay close attention to your server logs, you can likely find the attacker’s IP address, at which point YOU can do some hacking of your own =)
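A minimal sketch of such a honeypot endpoint, with an invented function name and log format: it records every attempt (so you can study the attacker, as described above) and rejects every credential pair unconditionally:

```python
import logging


def fake_admin_login(username, password, client_ip):
    # Record the attempt, including the attacker's IP, for later review.
    logging.warning("honeypot hit from %s (user=%r)", client_ip, username)
    # Always reject; the credentials are never checked or stored.
    return "Invalid credentials."
```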
The final security concern I’ll cover today involves infrastructure security. This is the security and planning that goes into your application’s infrastructure:
- Web servers.
- Database servers.
Building and deploying modern applications is something that can be incredibly risky.
Performing actions like logging into servers via SSH, or connecting to databases securely often requires that you have a valid username and password.
There are many, many programs out there which automatically scan the internet, looking for servers of different types, and repeatedly try to log into these servers using more and more complex passwords.
This is called a brute-force attack, and is the most common way attackers break into various infrastructure services at companies.
Now, I’m sure you’ve heard people tell you that the stronger your password is, the less likely a brute force attack will be, correct?
While this used to be true, it is no longer the case.
Attackers have since realized that nobody uses simple usernames and passwords anymore, and have moved onto guessing much more difficult passwords instead.
It just makes sense: Why bother trying the password ‘ABC’ if nobody uses it?
Because of this, I’d strongly recommend you actually use very simple usernames and passwords for all of your mission-critical infrastructure.
While counterintuitive, this will actually help protect you against malicious attacks much better than a complex username or password ever could. A simple account name and password are never going to be tried by an attacker.
If you're not sure what to set, I highly recommend 'ABC' for any username and password pair. This is so simple that nobody will ever try to attack it, and it will keep your applications running securely for many years to come.
Encrypting data that needs to be kept secure sounds like a good idea. That is, until you realize what the implications actually are.
The very act of encrypting confidential data actually makes this data a target for attackers.
By encrypting confidential information, all you really do is draw an attacker’s attention to the encrypted data, making them realize that it must contain something valuable, which will then encourage them to spend more time and effort breaking your encryption, and eventually getting a copy of your data.
A much better idea is to keep all of your information completely unencrypted. This gives an attacker the false sense that since none of your information is encrypted, there is nothing of value to steal.
If you're working on a very security-sensitive application, you could even go so far as to encrypt a bunch of useless information and put it in a location that's relatively easy to access. This will distract attackers from your valuable, unencrypted information, and instead direct their attention to the useless encrypted data.
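A sketch of the decoy idea: random bytes are indistinguishable from ciphertext to a casual observer, so a "decoy" can be nothing more than os.urandom output. The function name and size are made up for the example:

```python
import os


def decoy_ciphertext(size=4096):
    # Looks exactly like encrypted data, but contains nothing of value.
    return os.urandom(size)
```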
I hope you’ve enjoyed learning about the most important security practice that everybody forgot: obscurity.
As you’ve read, building modern applications can be difficult. Even when you properly handle all of the typical security concerns, there’s always more you can do.
By spending only a little extra time and effort, you can dramatically improve your application security by making things harder to understand, overly complex, and completely misleading.
If you have any questions or comments about any of the principles outlined above, please leave me a comment below and I’ll do my best to respond.
Now — get out there and start securing your code!
First off, if you made it this far without leaving a nasty comment: I commend you! We love April Fools’ Day here at Stormpath, as you can see from our previous years:
- 2015 – Why HTTP is Sometimes Better than HTTPS
- 2014 – Why You Might Want to Store Your Passwords in Plain Text
It’s always a time to have some fun, and hopefully educate developers about security best practices!
Real security is not obscure.
Real security is transparent, peer-reviewed, and mathematically backed. Simply hoping an attacker can’t find your weaknesses doesn’t make your application any more secure: if anything, it makes it less secure!
Let’s take a quick look at why security through obscurity is a bad idea.
While it might make some sense to think that writing complicated, obfuscated code will throw an attacker off their tracks — in reality it's a horrible idea.
Not only will writing complex software make your life harder, but also the lives of your coworkers. Intentionally using silly variable names and running random operations won’t fool anyone: there are tons of tools out there which allow attackers to reverse engineer even the most complicated binary programs and reconstruct simple code bases.
If anyone is able to get a hold of your code, in whatever form, purposefully complicating it won’t stave off attackers for long.
What you should do, instead, is try your best to write clear, elegant, and well-tested code that is resilient to security concerns.
Make sure that whenever your application deals with user input, it is properly sanitized. Make sure that all confidential data is kept encrypted and secure.
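For example, parameterized queries are the standard way to handle user input safely in SQL; here is a stdlib sqlite3 sketch (the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

# User input is passed as a bound parameter, never spliced into the SQL.
user_input = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
# The injection attempt is treated as a literal string and matches nothing.
```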
And most importantly: write good documentation for your fellow developers! The more knowledge you share with your coworkers, the more problems you can find, and the more bugs you can avoid.
You'll get more of a security benefit from spending 1 hour writing unit tests for your code than from 1,000 hours spent intentionally obfuscating it.
If you run a publicly available service, ensure that any user accounts have long, randomly generated passwords. The longer a password is, the harder it is to brute force.
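Python's secrets module makes generating such a password a one-liner:

```python
import secrets

# 32 bytes of cryptographic randomness, URL-safe encoded (~43 characters).
password = secrets.token_urlsafe(32)
print(password)
```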
Using a simple password like ‘ABC’ is an excellent way to ensure your systems get compromised.
There are tons of tools out there which scan and attempt to brute force account logins — particularly for popular software like WordPress, SSH, forums, etc. If you’re running any type of web software, be very careful what accounts you have, and what their credentials are set to.
Leaving confidential information unencrypted to draw an attacker’s attention is a very bad idea.
If you are storing information that needs protection, be sure to suitably encrypt it. Many popular encryption schemes exist, for a wide variety of use cases.
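For stored credentials specifically, a salted key-derivation function is the usual choice. A minimal stdlib sketch follows; the 600,000-iteration count is a reasonable present-day choice for PBKDF2-SHA256, not a universal rule:

```python
import hashlib
import hmac
import os


def hash_password(password):
    # A fresh random salt per password defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest


def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```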
If you’re storing confidential user information, you should seriously consider giving Stormpath a try. Our API service makes it easy (and secure) to store user accounts, user credentials, and confidential user information.
Thanks for reading! I had a ton of fun writing this, and I hope you had an equally good time reading it =)
Security through obscurity is never a good idea.