With the exception of vector #6, this is pretty much a list of implementation flaws, not protocol flaws. It's as if he's trying to say that the protocol isn't written defensively enough or foolproof enough for implementors, and he might have a good point if he simply said that, but he never did.
Even with an OAuth library for the nuts and bolts, to implement an OAuth2 authorization server requires grokking the spec.
Vector #6 (Phishing by spoofed client) has always been particularly interesting to me, and it applies to both OAuth2 and OAuth1. I don't see it as a protocol flaw but as an attack vector that service providers need to be more aware of.
While OAuth2 was in draft, I reported it to Facebook, Google, Twitter and the OAuth WG. Facebook addressed it by displaying the domain name of the client on their authorization dialog, but they eventually got rid of that. Google also acknowledged the vulnerability and said they would rely on policing registered clients to catch it. Twitter never responded. The WG asked me to propose text for the security considerations document, but I dropped the ball (too swamped) and couldn't do it in time.
> this is pretty much a list of implementation flaws, not protocol flaws
I can agree with that, but the problems outlined are very common, so the distinction doesn't really matter in practice. Facebook, for example, is a major OAuth2 provider and doesn't follow most of the spec, and the same goes for lots of popular libraries.
You're probably right; spec conformance is a problem. What should be done about it? If the spec itself isn't all that bad, then we should try to improve implementations. Maybe tools can be developed to help detect implementation flaws. Or if the letter of the spec is the problem, but the protocol/framework itself is good, then why not rewrite an alternative spec, similar to what was done recently for HTTP/1.1?
The spec should be robust against the possibility of implementation errors that result in the system being less secure. This is distinct from the possibility of implementation errors that simply result in the code not working at all or being obviously broken. You can't prevent those, but you also don't need to. The point is that subtle implementation errors should be ruled out as much as possible.
This is a common criterion in cryptography, against which systems and primitives are judged.
I have written an implementation for OAuth1 and OAuth2.
I liked the protocol/framework so little that I started implementing a new one. But soon I realized that a lot of the difficulties come from having this thing run on top of HTTP, and that I could not access any security feature of the lower levels.
So (IMHO) everything implemented on top of HTTP, or the whole idea of having isolated layers of security, is doomed to have problems and will cause headaches for anyone working with it, aside from requiring developers to be security experts.
But I still didn't stop, and my master's thesis is now a complete secure rewrite of the protocol stack, from TCP and TLS up to OAuth.
The project is on fenrirproject.org if you want to comment on it. It's a lot of work; I'm aiming for an implementation in half a year.
Please feel free to drop me a line.
I've written an OAuth2 server implementation and to be honest, I still don't really understand why people use OAuth2 instead of OAuth1.0a. The minimum amount of work you have to do to write an implementation that complies with the spec leaves you open to all sorts of security issues (as Homakov continues to detail), no two providers implement the same parts of the spec so writing clients isn't really all that easy, and the "security" features don't really seem all that secure. For example, refresh tokens, what the fuck? If an attacker can steal your access token why would you assume that your refresh tokens are safe? How is it ok to say "well, even if your access token is stolen at least it will expire eventually, so don't worry"?
Using a refresh token requires the client ID and secret to obtain a new access token; some providers also skip refresh tokens entirely and ask the user to re-authenticate.
I agree that implementations vary so vastly that it takes almost 40 sources to compile a decent down-to-earth explanation of common practices (same for OAuth 1.0a). However, that doesn't make the protocol bad; it makes the implementations bad. You can avoid XSS, cross-origin attacks, and the others, with the exception of the spoofed-client phishing vector, to which both are vulnerable.
That's fair, I guess what I'm trying to say is that if the spec leads to broken/vulnerable implementations in the majority of cases there might be reason for concern. One thing I think OAuth 2 gets right is the entire concept of scoped authorization; although not unique to OAuth2, it's now familiar to users largely because Facebook and Google adopted it through OAuth 2.
As someone who was present at Google working on OAuth at the time OAuth 2.0 was negotiated: the way to interpret refresh tokens is in the context of a large organization like Google or Facebook, not a small website. A refresh token, which is powerful, would only be presented to a single endpoint, which could have different logging and security considerations.
But yes, a lot of damage can be done in an access token timeout.
The point of refresh tokens is not to be more secure than access tokens, but to make some implementations more convenient.
- It's easier to change the format of short-lived access tokens, since you know there are no valid tokens hanging around after the expiry time. In contrast you may want refresh tokens to be valid for months or years.
- Every endpoint in your system must read access tokens, but only your authorization endpoint needs to read refresh tokens.
- In some cases it is acceptable to do checks only when verifying refresh tokens, e.g. checking for revocation only when refreshing the tokens, while access tokens are trusted implicitly while valid.
For a simple implementation you can just issue long-lived access tokens, use of refresh tokens is optional.
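A toy sketch of that division of labor (illustrative only; `AuthServer` and its in-memory storage are made up, not any real library's API): client credentials and revocation are checked only at the refresh endpoint, while resource servers just check an access token's expiry.

```python
import secrets
import time

ACCESS_TTL = 3600  # access tokens are short-lived

class AuthServer:
    """Toy authorization server: revocation is only checked at refresh time."""

    def __init__(self, clients):
        self.clients = clients      # client_id -> client_secret
        self.refresh_tokens = {}    # refresh token -> client_id
        self.revoked = set()        # client_ids whose grants were revoked

    def issue(self, client_id):
        refresh = secrets.token_urlsafe(32)
        self.refresh_tokens[refresh] = client_id
        # Resource servers only need to check the expiry, nothing else.
        access = {"token": secrets.token_urlsafe(32),
                  "expires_at": time.time() + ACCESS_TTL}
        return access, refresh

    def refresh(self, token, client_id, client_secret):
        # Refreshing requires client credentials, so a stolen refresh
        # token alone is not enough.
        if self.clients.get(client_id) != client_secret:
            raise PermissionError("bad client credentials")
        if self.refresh_tokens.get(token) != client_id:
            raise PermissionError("unknown refresh token")
        if client_id in self.revoked:
            del self.refresh_tokens[token]
            raise PermissionError("grant revoked")
        del self.refresh_tokens[token]   # rotate the old refresh token
        return self.issue(client_id)
```

Rotating the refresh token on every use also means a replayed (possibly stolen) refresh token is detectable, which is one of the checks the authorization endpoint can afford that every resource endpoint can't.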
I had to write one too, and feel exactly the same way.
The common argument for OAuth2 seems to be, "Well, Google and Facebook are doing it, so it must be worth something." Of course Google and Facebook are doing it; it lets them play the role of the official identity keepers of the internet.
Those companies are known to pick the best and the brightest engineers, yet exploits were found in even their versions of OAuth2. If they couldn't produce a secure implementation, then can anyone?
This article is just technical enough for me. As a developer, implementing security can be a pain in the ass. All I want is for someone to tell me whether a protocol is secure so I can get on with it.
I don't know about that — it helps to have more than superficial knowledge of the topic at hand so that you can adequately assess rants such as this one.
Personally, while I feel that Igor Homakov has done good work, this article is the product of frustration and is a disservice to its audience. Most if not all of his criticisms of OAuth 2 come down to implementation problems, and a more positive contribution would be an implementers' guide or a threat model document. For example, https://tools.ietf.org/html/rfc6819 and http://leastprivilege.com/2013/03/15/common-oauth2-vulnerabi....
That's the kind of attitude that leads to massive security flaws. Understanding security is more than just "yes or no". You must understand the concepts. If you don't, stop professionally writing software, because you're doing something irresponsible that will do real harm to real people.
Agreed, but there is a difference between understanding, let's say, how PKI works and how it applies to HTTPS vs knowing how to implement AES. Sometimes, we just need to know that SSL 3 is broken, and don't need to know the exact details. All we then need to know is to stop using it.
My comment was more generalized, as it was responding to a general comment.
However, I will say that HTTPS is not too complicated to understand and that it's not a magic bullet.
You still need to understand, for example, how a certificate can be compromised and what the pros/cons are of different implementations. It's not a simple "yes or no", even though it's close.
It sounds like maybe you don't understand the concepts.
Of course you're right that most people don't need to know exactly how encryption algorithms work. But, everybody needs to know what they do -- and what they don't do! That's a deeper level understanding than simply knowing if they're "secure" or not.
For example, too many people think that encryption gives you security. It does not. Encryption can provide confidentiality, but only if you also have integrity and authentication. Those three things are just the beginning of security.
One of the implications is that if you're using a self-signed certificate for HTTPS, you might as well not bother encrypting. If you don't reject a certificate lacking a verified signature, then you can't know that you aren't talking to a MITM instead of the server you think you're accessing. A MITM can trivially decrypt all your data, so why bother encrypting in the first place if you don't verify certs? Too many people ignore the certificates because they don't understand what encryption really gets them.
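The "encrypted but unverified" mode is easy to demonstrate with Python's standard `ssl` module (a sketch of the two context configurations, not any particular application's code): the default context rejects a self-signed or mismatched certificate, while switching off verification still encrypts the channel but will happily complete a handshake with a MITM.

```python
import ssl

# Verifying context: rejects self-signed / untrusted certificates and
# checks that the certificate actually matches the hostname.
safe_ctx = ssl.create_default_context()
assert safe_ctx.verify_mode == ssl.CERT_REQUIRED
assert safe_ctx.check_hostname

# "Encryption without verification": traffic is encrypted, but to whom?
# Any MITM can present any certificate and this context will accept it.
unsafe_ctx = ssl.create_default_context()
unsafe_ctx.check_hostname = False          # must be disabled first
unsafe_ctx.verify_mode = ssl.CERT_NONE
```

Wrapping a socket with `safe_ctx.wrap_socket(sock, server_hostname=host)` raises `ssl.SSLCertVerificationError` for an untrusted certificate; with `unsafe_ctx` the same connection silently succeeds, which is exactly the false sense of security described above.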
Many people also discount that danger because they don't understand how trivially easy MITM attacks can be. ARP spoofing is not hard. Some networking equipment is getting better at preventing it, but you can't always count on it. In short, it's best to assume that anybody else with a laptop in your local coffee shop can see _and modify_ all network packets you send. They don't necessarily have to break the wireless encryption to see them, either, so that won't keep you safe.
"Secure or not" is not a meaningful distinction. "Secure" means a lot of different properties which may be of varying importance to different people, and it applies to the whole system, not individual components. So the only meaningful interpretation of "secure" for a subcomponent is "Does it achieve the security properties it aims to?" That does not excuse you from learning what those properties are, what they mean, and their implications for the security of the system as a whole.
Valid point. Even encryption is only "secure" for a limited time. I could sniff the traffic, store the data, and wait until the encryption is crackable. For most transactions that is good enough, as our passwords are probably not relevant in 10 years. For some transactions it may not be enough, because you will probably have the same bank account in 10 years. Granted, the cost to capture and store traffic for a later date probably outweighs the potential for exploitation.
At the same time I would not expect a front end JS/CSS developer to know the specifics of the entire system, only the parts of his/her subsystem. That is to say they should know XSS/CSRF like the back of their hand, but probably don't need to fully understand a stack overflow. On the other hand if you write C/C++ or any other low/mid level language XSS probably means nothing to you and stack overflow is highly important.
See my comment above. There's a lot more to protecting a site from man-in-the-middle attacks than enabling HTTPS.
The most important things people need to know are the pros/cons of different types of certificates, how to keep certificates safe, and whether they have a vulnerable SSL library installed.
To the downvoters: if you stick your head in the sand, you'll do things like using MD5 for password hashing and logging into remote servers as root. I see basic mistakes like that all the time.
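For contrast with the MD5 mistake, here is a minimal sketch of salted, memory-hard password hashing using only the standard library (`hash_password`/`verify_password` are illustrative names, and the scrypt cost parameters are common baseline values, not a recommendation tuned for any particular deployment):

```python
import hashlib
import hmac
import os

SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)

def hash_password(password: str):
    salt = os.urandom(16)
    # scrypt is salted and memory-hard; MD5 is neither, which is why
    # MD5'd password databases fall quickly to rainbow tables and GPUs.
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)
```

The per-user random salt defeats precomputed tables, and the tunable cost parameters let you keep the hash expensive as hardware improves; neither property is available with a bare MD5 digest.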
I'll reiterate (and this is a general comment, not necessarily about HTTPS): if you aren't willing to understand how software works and how people attack it, don't write it professionally. It's part of your job and your responsibility to your customers and their users.
When systems are cracked, it can leak financial info, passwords, addresses, children's names, medical info, etc., etc. You may have a totally innocuous site that helps someone get into one of your user's more sensitive accounts.
Security is really important and failing to understand it can ruin people's lives. I've personally seen it happen.
It worries me that saying something as simple and unassailable as "understand the security implications of your code" got downvoted on a "hacker" site so many times.
"Understanding security is more than just "yes or no". You must understand the concepts. If you don't, stop professionally writing software, because you're doing something irresponsible that will do real harm to real people."
Which is a very direct and negative comment. Not all software significantly touches on security. People write one-off programs for generating musical compositions, one-off pieces of data analysis, proofs of concept that aren't designed to ship, and any number of non-internet-connected programs where the security considerations are less significant.
If you didn't mean those applications, then your comment amounts to "people writing security sensitive software should be mindful of security". Which is so redundant as to be meaningless.
Telling people "you have no right to be programming" on a hacker forum is unlikely to make you many friends.
Yes, I was being direct and negative. I was responding to someone who wanted to be a developer and not have to understand security. That kind of attitude/culture is what makes so many thousands of widely-used applications vulnerable. Security shouldn't be an afterthought. People trust us to write secure software, and few of us do.
The key word in my comment was "professionally". I'm not telling someone experimenting for fun to learn detailed security implications. I'm talking to someone who is charging someone (clients or employers) for their work.
And what I said is, sadly, not so redundant as to be meaningless because I was responding to someone who said "I don't want to be mindful of security, just tell me if [XYZ] works." So obviously it DID need to be said!
If only people looked into it more than 5 minutes and realized that Persona is still maintained and is by far the best way to do authentication on the web. Really, the most correct protocol out there.
But then of course, if only Mozilla had a marketing team worth a damn and didn't make it look like they gave up on the whole thing in the first place, we wouldn't have this situation.
I get sad trying to promote it. I get the feeling that whoever is in charge of decisions around Persona has no idea how important a project like it is for the web. Everybody is tying a core and extremely security-sensitive part of their websites, authentication, to other websites in a non-decentralized way. And every time there's a damn post about "Facebook auth is down!", "Twitter auth is down!"... how long is it going to be until those are down for good and people just can't log in anymore?
I agree with you. I looked into Persona to implement an SSO for our products (provider and client). I've never been able to understand OAuth in the context of an authentication mechanism so I left it.
However, Persona has very poor library support, especially for providers. The support channels are also very small, so you're not likely to find other people fixing the same issues.
I ended up just hacking an OpenID provider library to get the result I wanted. It's a real shame, because Persona seemed to be designed for circumstances very similar to what I wanted to do.
I strongly believe in the protocol, but I have stopped believing in Mozilla to actually do something with it. They have absolute technical gold (yes, it has a couple of issues, they are minor overall) and they just aren't doing anything with it.
I still encourage people to actually implement it because, unlike with centralized protocols, we don't actually need Mozilla for it to work (persona servers are open source and the whole protocol is decentralized). But I have very little faith in ever getting the critical mass necessary for a majority of devs to adopt it without Mozilla promoting it more. (And they were so close, too, with their gmail gateway...)
My understanding from my research into OAuth2 is that most of the vulnerabilities in it are only issues in a naive implementation. It can be made secure, but it's not easy, and you have to know to do it in the first place.
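A classic example of such a naive-implementation flaw is a client that omits the `state` parameter on the authorization request, leaving it open to login CSRF. A minimal sketch of the check the spec recommends (the provider URL and session dict here are hypothetical, not any real provider's endpoints):

```python
import secrets
from urllib.parse import urlencode

def build_authorize_url(session, client_id, redirect_uri):
    state = secrets.token_urlsafe(16)
    session["oauth_state"] = state          # bind state to this user's session
    return "https://provider.example/authorize?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
    })

def handle_callback(session, params):
    expected = session.pop("oauth_state", None)
    received = params.get("state")
    # Without this check, an attacker can splice their own authorization
    # code into a victim's session (login CSRF).
    if not expected or not received or not secrets.compare_digest(received, expected):
        raise PermissionError("state mismatch: possible CSRF")
    return params["code"]
```

Nothing in the flow breaks if the client skips this check, which is exactly why so many naive implementations ship without it.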
Doesn't OpenID Connect address those issues? I know that's what Google is using now.