Ruminations of idle rants and ramblings of a code monkey

Sql Injection #1 Hacker Technique

On April 15, Verizon Business Security Solutions released the 2009 Data Breach Investigations Report, a comprehensive analysis of the data breaches that they investigated throughout 2008. A total of 285 million records were compromised as a result of these breaches, and 79% of those records (approximately 214 million) were compromised using SQL injection-based attacks, typically through custom-developed web applications. Attackers are targeting the financial industry (which accounted for 93% of the total records) and, in particular, PIN data together with the associated credit and debit accounts; these records represent a far greater risk to the compromised user's financial data and funds than magnetic stripe records or simple credit card account numbers. Do I have your attention yet? Are you thinking to yourself "Holy Cow!"? I know that I was when I first saw this ... we've known about the potential exploits of Sql Injection for a long time now (almost 10 years) and it's still the most successful method of choice for data breaches. And these aren't easy or simple breaches ... the breaches that are considered the most complex are responsible for 95% of the compromised records ... some of these attacks were the result of months of research. Who, you may ask, would have the time, patience and resources to dedicate so much effort to an attack? Of the external breaches where the IP address was traced to a specific entity, 16 out of 25 were traced to known organized crime outfits. We aren't dealing with the zit-faced script kiddie here, munching pizza in the darkness of his parents' basement. We're talking about career criminals that will take advantage of this information. Conventional wisdom often points to insiders as the most dangerous source of breaches, but this data does give pause to that assumption. The largest and most damaging breaches were externally sourced, not internal.
Still, one should not, and in looking at the data, cannot, dismiss the damage potential of internal breaches; while the largest individual breaches were external, the median number of records compromised in internal attacks was just over 2.5 times the median for external sources. As far as risk and damage potential go, both sources are a high risk for compromise. "Wow!" you say, "I thought that Cross Site Scripting was OWASP's #1 threat!" Well, injection attacks are #2 and I'd bet it was a close race. That said, it's not so simple. Cross Site Scripting (XSS) and Cross Site Request Forgery (CSRF) have been used to spread JavaScript-based worms that then use Sql Injection for an attack. Attack vectors, it seems, don't like to be alone and prefer to travel with their buddies. We've known about Sql Injection and its potential for damage for a long time now. We've known that this type of attack is technology and database agnostic. Yet it is still a major issue. And it's difficult to get developers to actually listen to security talks ... there is still the attitude, it seems, that security is an infrastructure problem – but it clearly is not. Are you one of the ones that care deeply about security? Or do you want to prove my previous statement wrong (I'd love to be proven wrong on that)? Why don't you show up at the Houston OWASP group or your local OWASP group?
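Since parameterized queries are the canonical defense against Sql Injection, here's a quick illustrative sketch. The table and column names are made up for the example; the point is that the user's input travels as a parameter value and is never parsed as Sql text.

```csharp
using System.Data.SqlClient;

public static class UserQueries
{
    //Builds a lookup command for a (hypothetical) Users table.
    public static SqlCommand BuildLookupCommand(string userName)
    {
        //Vulnerable version (don't do this):
        //  "SELECT * FROM Users WHERE UserName = '" + userName + "'"
        //Parameterized version: the input is bound as data, never as Sql.
        SqlCommand cmd = new SqlCommand(
            "SELECT * FROM Users WHERE UserName = @userName");
        cmd.Parameters.AddWithValue("@userName", userName);
        return cmd;
    }
}
```

Even the classic `' OR '1'='1` payload arrives at the server as a literal string value this way, not as part of the query.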

New Account Email Validation (Part II)

.NET Stuff | Security | Web (and ASP.NET) Stuff
In my previous post, I discussed the things to keep in mind with new account validation. Well, as promised, I've done a sample of one way to do this. Certainly, step 1 is to do as much as possible without writing any code, following the KISS principle. Since I am using the CreateUserWizard control, I set the DisableCreatedUser property to true and LoginCreatedUser to false. Easy enough. But that's not the whole story. We need to generate the actual validation key. There are a lot of ways that one can do this. Personally, I wanted, as much as possible, to avoid any dependency on storing the validation code in the database. This, of course, ensures that, should our database be penetrated, the validation codes cannot be determined. With that, then, the validation code should come from data that is supplied by the user and then be generated in a deterministic way on the server. Non-deterministic, of course, won't work too well. I started down (and really, almost completed) a path that took the UserName and Email, concatenated them, and generated bytes (using System.Security.Cryptography.Rfc2898DeriveBytes) to create a 32-byte salt. I then concatenated the UserName and Email again and hashed the result with SHA1. This certainly satisfied my conditions ... the values came from the user, so the validation code didn't need to be stored. And it was certainly convoluted enough that a validation code would be highly difficult to guess, even by brute force. In the email to the user, I also included a helpful link that passed the validation code in the query string. Still, this code was some 28 characters in length. Truly, not an ideal scenario. And definitely complex. It was certainly fun to get the regular expression to validate this correct ... more because I'm just not all that good at regular expressions than anything else. If you are interested, the expression is ^\w{27}=$.
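For the curious, a sketch of roughly what that abandoned approach looked like. The iteration count, the fixed derivation salt and the names here are my own choices for the example, not the original code; note that SHA1's 20-byte output Base64-encodes to exactly the 28 characters mentioned above.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class ComplexValidationCode
{
    //Derive a 32-byte salt from UserName + Email via Rfc2898DeriveBytes,
    //then SHA1-hash the salted concatenation and Base64-encode the result.
    public static string Create(string userName, string email)
    {
        string combined = userName + email;
        byte[] derivationSalt = Encoding.UTF8.GetBytes("example salt val");
        Rfc2898DeriveBytes kdf =
            new Rfc2898DeriveBytes(combined, derivationSalt, 1000);
        byte[] salt = kdf.GetBytes(32);

        byte[] data = Encoding.UTF8.GetBytes(combined);
        byte[] toHash = new byte[salt.Length + data.Length];
        Buffer.BlockCopy(salt, 0, toHash, 0, salt.Length);
        Buffer.BlockCopy(data, 0, toHash, salt.Length, data.Length);

        using (SHA1 sha = SHA1.Create())
        {
            return Convert.ToBase64String(sha.ComputeHash(toHash));
        }
    }
}
```

Deterministic, derived entirely from user-supplied values, and hard to guess ... but, as the next paragraphs explain, far more complex than the problem deserves.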
Thinking about this, I really didn't like the complexity. It seems that I fell into the trap that often ensnares developers: loving the idea of a complex solution. Yes, it's true ... sometimes developers are drawn to creating complex solutions to what should be a simple problem, just because they can. I guess it is a sort of intellectual ego coming out ... we seem to like to show off how smart we are. And all developers can be smitten by it. Developing software can be complex enough on its own ... there really is no good reason to add to that complexity when you don't need to. Two key reasons come to mind. 1) The code is harder to maintain. Digging through the convolutions of overly complicated code can make the brain hurt. I've done it and didn't like it at all. 2) The more complex the code, the more likely you are to have bugs or issues. There's more room for error, and the fact that it's complicated and convoluted makes it easier to introduce these errors and then miss them later. It also makes thorough testing harder, so many bugs may not be caught until it's too late. So, I wound up re-writing the validation code generation. How did I do it? It's actually very simple. First, I convert the user name, email address and creation date into byte arrays. I then loop over all of the values, adding them together. Finally, I take the sum of the lengths of the user name, email address and creation date and subtract it from the previous value. This then becomes the validation code. Typically, it's a 4 digit number. This method has several things going for it. First, it sticks to the KISS principle. It is simple. There are very few lines of code in the procedure and these lines are pretty simple to follow. There are other values that could be used ... for example, the MembershipUser's ProviderUserKey ... when you are using the Sql Membership provider, this is a GUID. But not using it keeps the code free of a dependency on any particular provider.
Second, it is generated from a combination of values supplied by the user and values that are kept in the database. There is nothing that indicates what is being used in the code generation ... it's just a field that happened to be there. This value is not as random as the previous one, I know. It's a relatively small number and a bad guy could likely get it pretty quickly with a brute-force attack if they knew it was all numbers. To mitigate this, one could keep track of attempted validations using the MembershipUser's Comment property, locking the account when there are too many attempts within a certain time period. No, I did not do this. Considering what I was going to use this for (yes, I am actually going to use it), the potential damage was pretty low and I felt that it was an acceptable risk. Overall, it's a pretty simple way to come up with a relatively good validation code. And it's also very user-friendly. Here's the code:

public static string CreateValidationCode(System.Web.Security.MembershipUser user)
{
    byte[] userNameBytes = System.Text.Encoding.UTF32.GetBytes(user.UserName);
    byte[] emailBytes = System.Text.Encoding.UTF32.GetBytes(user.Email);
    byte[] createDateBytes = System.Text.Encoding.UTF32.GetBytes(user.CreationDate.ToString());

    int validationcode = 0;
    foreach (byte value in userNameBytes)
    {
        validationcode += value;
    }
    foreach (byte value in emailBytes)
    {
        validationcode += value;
    }
    foreach (byte value in createDateBytes)
    {
        validationcode += value;
    }
    validationcode -= (user.UserName.Length
        + user.Email.Length
        + user.CreationDate.ToString().Length);
    return validationcode.ToString();
}

Architecturally, all of the code related to this is in a single class called MailValidation. Everything related to the validation codes is done in that class, so moving from the overly-complex method to my simpler method was easy as pie. All I had to do was change the internal implementation.
Now that I think of it, there's no reason why it can't be done using a provider model so that different implementations are pluggable. Once the user is created, we generate the validation code. It is never stored on the server, but is sent to the user in an email. This email comes from the MailDefinition specified with the CreateUserWizard ... this little property points to a file that the wizard will automatically send to the new user. It will put the user name and password in there (with the proper formatting), but you'll need to trap the SendingMail event to modify it before it gets sent in order to put the URL and validation code in the email.

//This event fires when the control sends an email to the new user.
protected void CreateUserWizard1_SendingMail(object sender, MailMessageEventArgs e)
{
    //Get the MembershipUser that we just created.
    MembershipUser newUser = Membership.GetUser(CreateUserWizard1.UserName);
    //Create the validation code
    string validationCode = MailValidation.CreateValidationCode(newUser);
    //And build the url for the validation page.
    UriBuilder builder = new UriBuilder("http",
        Request.Url.DnsSafeHost,
        Request.Url.Port,
        Page.ResolveUrl("ValidateLogin.aspx"),
        "C=" + validationCode);
    //Add the values to the mail message.
    e.Message.Body = e.Message.Body.Replace("<%validationurl%>", builder.Uri.ToString());
    e.Message.Body = e.Message.Body.Replace("<%validationcode%>", validationCode);
}

One thing that I want to point out here ... I'm using the UriBuilder class to create the link back to the validation page. Why don't I just take the full URL of the page and replace "CreateAccount.aspx" with the new page? Well, I would be concerned about canonicalization issues. I'm not saying that there would be any, but it's better to be safe. The UriBuilder will give us a good, clean url. The port is added in there so that it works even if it's running under the VS development web server, which puts the site on random ports.
I do see a lot of developers using things like String.Replace() and parsing to get urls in these kinds of scenarios. I really wish they wouldn't. Things do get a little more complicated, however, when actually validating the code. There is a separate form, of course, that does this. Basically, it collects the data from the user, regenerates the validation code and then compares them. It also checks the user's password by calling Membership.ValidateUser. If either of these fails, the user is not validated. Seems simple, right? Well, there is a monkey wrench in here. If the MembershipUser's IsApproved property is false, ValidateUser will always fail. So we can't check the password until the user is approved. But ... we need the password to validate their user account. See the problem? If I just check the validation code and the password is incorrect, you shouldn't be able to validate. What I had to wind up doing was this: once the validation code was validated, I had to set IsApproved to true. Then I'd call ValidateUser. If that failed, I'd set it back.

protected void Validate_Click(object sender, EventArgs e)
{
    //Get the membership user.
    MembershipUser user = Membership.GetUser(UserName.Text);
    bool validatedUser = false;
    if (user != null)
    {
        if (MailValidation.CheckValidationCode(user, ValidationCode.Text))
        {
            //Have to set the user to approved to validate the password
            user.IsApproved = true;
            Membership.UpdateUser(user);
            if (Membership.ValidateUser(UserName.Text, Password.Text))
            {
                validatedUser = true;
            }
        }
    }
    //Set the validity for the user.
    SetUserValidity(user, validatedUser);
}

You do see, of course, where I had to approve the user and then check. Not ideal, not what I wanted, but there was really no other way to do it. There are a couple of things, however, that I want to point out. Note that I do the actual, final work at the very end of the function.
Nowhere do I call that SetUserValidity method until the end, after I've explored all of the code branches necessary. Again, I've seen developers embed this stuff directly in the If blocks. Ewww. And that makes it a lot harder if someone needs to alter the process later. Note that I also initialize the validatedUser variable to false. Assume the failure. Only when I know it's gone through all of the tests and is good do I set that validatedUser flag to true. It both helps keep the code simpler and ensures that if something was missed, it would fail. Well, that's it for now. You can download the code at
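(For reference, here's a minimal sketch of what the CheckValidationCode logic might look like. This is my reconstruction, not the downloadable code; I've expressed it over the raw values rather than a MembershipUser so it's easy to follow and test on its own, and the class name here is mine.)

```csharp
using System;
using System.Text;

public static class MailValidationSketch
{
    //Same summation scheme as CreateValidationCode, expressed over raw values.
    public static string ComputeCode(string userName, string email, string createDate)
    {
        int code = 0;
        foreach (byte b in Encoding.UTF32.GetBytes(userName)) { code += b; }
        foreach (byte b in Encoding.UTF32.GetBytes(email)) { code += b; }
        foreach (byte b in Encoding.UTF32.GetBytes(createDate)) { code += b; }
        code -= (userName.Length + email.Length + createDate.Length);
        return code.ToString();
    }

    //Recompute the expected code and compare it to what the user supplied.
    public static bool CheckValidationCode(string userName, string email,
                                           string createDate, string suppliedCode)
    {
        if (String.IsNullOrEmpty(suppliedCode)) { return false; }
        return String.Equals(ComputeCode(userName, email, createDate),
                             suppliedCode.Trim(), StringComparison.Ordinal);
    }
}
```

Because the code is recomputed from the user's own data, nothing needs to be stored server-side to verify it.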

New Account Email Validation (Part I)

.NET Stuff | Security | Web (and ASP.NET) Stuff
We've all seen it ... when you sign up for a new account, your account isn't active until you validate it from an email sent to the registered email address. This allows sites with public registration to ensure a couple of things. First, that the email provided by the user actually does exist (and they didn't have a typo). Second, it also validates that the person signing up has access to that email address. Now, let's be clear, it doesn't necessarily ensure that the user signing up is the legitimate owner of the email address ... there isn't much more that we can do to actually validate that, as we don't control the email system that they use ... but, in a realistic world, that's the best we can do. Now, there was a little exchange recently on the ASP.NET forums that I had with someone asking how to do this very thing with ASP.NET's membership system. Of course, this is perfectly possible to do. Now, I do believe in the KISS (that's Keep It Simple Stupid) principle, so I look at this from a perspective of using as much built-in functionality as possible to accomplish it. So, for example, I'd really prefer, as much as possible, not to have any additional database dependencies such as new tables, etc., to support this new functionality (which isn't there out-of-the-box). First things first ... when the new account is created, the account (represented by the MembershipUser class) should have the IsApproved property set to false. This will prevent any logins until such time as the flag is changed. There are two ways to do this, depending on how the user signs up. If you are using the built-in CreateUserWizard, you can set the DisableCreatedUser property to true. You can also do it if you are calling the API directly from a custom WebForm (or other method). This is accomplished by calling the CreateUser method on the Membership class. There are two overloads that will allow you to do this; both of them take a boolean isApproved argument.
Again, if this is false, the user won't be allowed to log in until they are approved. Of course, in extranet-type scenarios with some user self-service, this can be used to validate that a newly registered extranet user is legitimate via a manual review process. And in those types of cases, because of the very nature of extranets, you would want it to be a manual review process, to thoroughly vet the users. Note that you'll also want to do this if you happen to be a judge and have some nasty personal stuff out there that some people may find offensive or think leads to a conflict of interest in a case that you are trying. But that's not what we are doing here. We want this to be open and completely self-service, but to still validate that the email is valid and the user has access to it, ensuring that we can communicate with them (or spam them, depending on your viewpoint). We've already discussed the whole e-mail-account-security thing ... nothing that we can do about that, so we'll just move on. But how can we further ensure that we have a (relatively) secure method for doing this, even with the whole e-mail security issue? First, we need to make sure that whatever validation code we use is not easy for a bad guy to guess ... that would defeat the purpose. How far you go with this will certainly depend a great deal on what the risk from a failure is ... for example, if this is a site where you have dog pictures, it's not that big of a deal. If, however, it's an ecommerce site, you need to be a bit more cautious. Second, we also need to keep in mind that the validation code could be intercepted en route. Keep in mind – and a number of devs seem to forget this – SMTP is not a secure protocol. Neither is POP3. They never were; they just weren't designed for it.
(This highlights one of the things that I tell developers a lot ... there is no majik security pixii dust ... you cannot bolt "security" on at the end of the project; it absolutely must be built in from the initial design and architecture phase.) Everything in SMTP is transmitted in the clear, as is POP3. In fact, if you're feeling ambitious, you can pop open a telnet client and use it for SMTP and POP3. It's not the most productive way to work, but it is enlightening. These are the two things that come to mind that are unique to this scenario. There are additional things that you need to account for ... Sql Injection, XSS and the rest of the usual suspects. Now that I've said all of that, I will also tell you that I'm working on a sample that shows some techniques for doing this. When I'm done, I will post it here along with a discussion of what was done and what alternative options you have based on your needs, requirements and risk analysis. So ... stay tuned right here for more fun and .Net goodness!
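As a quick sketch of the API route mentioned above, one of the Membership.CreateUser overloads takes the isApproved flag directly. The wrapper method and variable names here are just for illustration:

```csharp
using System.Web.Security;

public static class AccountCreation
{
    //Creates the account disabled (isApproved = false) so the user
    //cannot log in until email validation succeeds.
    public static MembershipUser CreateUnapprovedUser(
        string userName, string password, string email,
        string passwordQuestion, string passwordAnswer)
    {
        MembershipCreateStatus status;
        MembershipUser newUser = Membership.CreateUser(
            userName, password, email,
            passwordQuestion, passwordAnswer,
            false,          //isApproved
            out status);

        if (status != MembershipCreateStatus.Success)
        {
            //Duplicate user name, invalid password, etc. ... inspect status.
            return null;
        }
        return newUser;
    }
}
```

Later, once the emailed code checks out, you flip IsApproved to true and call Membership.UpdateUser to persist it.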

Thoughts on Secure File Downloads

.NET Stuff | Security | Web (and ASP.NET) Stuff
Well, that's kinda over-simplifying it a bit. It's more about file downloads and protecting files from folks that shouldn't see them, and it comes from some of the discussion last night at the OWASP User Group. So ... I was thinking that I'd put together a master file-download page for my file repository. The idea is that there would be an admin section where I could upload the files, a process that would also put them into the database with the relevant information (name, content type, etc.). This would be an example of one of the vulnerabilities discussed last night ... insecure direct object reference. Rather than giving out filenames, etc., the page would hand out a file identifier (OWASP #4). That way, there is no direct object reference. That file id would be handed off to a handler (ASHX) that would actually send the file to the client (just doing a redirect from the handler doesn't solve the issue at all). But I got to thinking ... I might also want to limit access to some files to specific users/logins. So now we are getting into restricting URL access (OWASP #10). If I use the same handler as mentioned above, I can't use ASP.NET to restrict access, leaving me vulnerable. Certainly, using GUIDs makes the ids harder to guess, but it won't prevent UserA, who has access to FileA, from sending a link to UserB, who does not have access to FileA. Once UserB logged in, there would be nothing to prevent him/her from getting to the file ... there is no additional protection above and beyond the indirect object reference, and I'm not adequately protecting URL access. This highlights one of the discussion points last night: vulnerabilities often travel in packs. We may look at things like the OWASP Top Ten and identify individual vulnerabilities, but that looks at the issues in isolation. The reality is that you will often have a threat with multiple potential attack vectors from different vulnerabilities.
Or you may have a vulnerability that is used to exploit another vulnerability (for example, a Cross-Site Scripting vulnerability that is used to exploit a Cross Site Request Forgery vulnerability, and so on and so on). So ... what do I do here? Well, I could just not worry about it ... the damage potential and level of risk is pretty low, but that really just evades the question. It's much more fun to actually attack this head on and come up with something that mitigates the threat. One method is to have different d/l pages for each role and then protect access to those pages in the web.config file. That would work, but it's not an ideal solution. When coming up with mitigation strategies, we should also keep usability in mind and balance usability against our mitigation strategy. This may not be ideal to the purist, but the reality is that we do need to take things like usability and the end-user experience into account. Of course, there's also the additional maintenance that the "simple" method would entail as well – something I'm not really interested in. Our ideal scenario would have one download page that would then display the files available to the user based on their identity, whether that is anonymous or authenticated. So ... let's go through how to implement this in a way that mitigates (note ... not eliminates but mitigates) the threats. First, the database. Here's a diagram (not reproduced here): we have the primary table (FileList) and then the FileRoleXREF table. The latter has the file ids and the roles that are allowed to access each file. A file that all are allowed to access will not have any records in this table. To display the list of files for a logged in user, we need to build the Sql statement dynamically, with a where clause based on the roles for the current user. This, by the way, is one of the "excuses" that I've heard for using string concatenation to build Sql statements.
It's not a valid one; it just takes some more work. And, because we aren't concatenating any user-supplied values into the Sql, we've also mitigated Sql injection, even though the risk of that is low since the list of roles is coming from a trusted source. Still, it's easy and it's better to be safe. So ... here's the code.

public static DataTable GetFilesForCurrentUser()
{
    //We'll need this later.
    List<SqlParameter> paramList = new List<SqlParameter>();
    //Add the base Sql.
    //This includes the "Where" for files for anon users
    StringBuilder sql = new StringBuilder(
        "SELECT * FROM FileList " +
        "WHERE (FileId NOT IN " +
        "(SELECT FileId FROM FileRoleXREF))");
    //Check the user ...
    IPrincipal crntUser = HttpContext.Current.User;
    if (crntUser.Identity.IsAuthenticated)
    {
        string[] paramNames = GetRoleParamsForUser(paramList, crntUser);
        //Now add to the Sql
        sql.Append(" OR (FileId IN (SELECT FileId FROM " +
            "FileRoleXREF WHERE RoleName IN (");
        sql.Append(String.Join(",", paramNames));
        sql.Append(")))");
    }
    return GetDataTable(sql.ToString(), paramList);
}

private static string[] GetRoleParamsForUser(List<SqlParameter> paramList, IPrincipal crntUser)
{
    //Now, add the select for the roles.
    string[] roleList = Roles.GetRolesForUser(crntUser.Identity.Name);
    //Create the parameters for the roles
    string[] paramNames = new string[roleList.Length];
    for (int i = 0; i < roleList.Length; i++)
    {
        string role = roleList[i];
        //Each role is a parameter ...
        string paramName = "@role" + i.ToString();
        paramList.Add(new SqlParameter(paramName, role));
        paramNames[i] = paramName;
    }
    return paramNames;
}

From there, creating the command and filling the DataTable is simple enough. I'll leave that as an exercise for the reader. This still, however, doesn't protect us from the failure-to-restrict-URL-access issue mentioned above. True, UserA only sees the files that he has access to and UserB only sees the files that she has access to. But that's still not stopping UserA from sending UserB a link to a file that he can access, but she can't.
In order to prevent this, we have to add some additional checking to the ASHX handler to validate access. It'd be easy enough to do with a couple of calls to Sql, but here's how I do it with a single call ...

public static bool UserHasAccess(Guid FileId)
{
    //We'll need this later.
    List<SqlParameter> paramList = new List<SqlParameter>();
    //Add the file id parameter
    paramList.Add(new SqlParameter("@fileId", FileId));
    //Add the base Sql.
    //This includes the "Where" for files for anon users
    StringBuilder sql = new StringBuilder(
        "SELECT A.RoleEntries, B.EntriesForRole " +
        "FROM (SELECT COUNT(*) AS RoleEntries " +
        "FROM FileRoleXREF X1 " +
        "WHERE (FileId = @fileId)) AS A CROSS JOIN ");
    //Check the user ...
    IPrincipal crntUser = HttpContext.Current.User;
    if (crntUser.Identity.IsAuthenticated)
    {
        sql.Append("(SELECT Count(*) AS EntriesForRole " +
            "FROM FileRoleXREF AS X2 " +
            "WHERE (FileId = @fileId) AND " +
            "RoleName IN (");
        string[] roleList = GetRoleParamsForUser(paramList, crntUser);
        sql.Append(String.Join(",", roleList));
        sql.Append(")) B");
    }
    else
    {
        sql.Append("(SELECT 0 AS EntriesForRole) B");
    }
    DataTable check = GetDataTable(sql.ToString(), paramList);
    if ((int)check.Rows[0]["RoleEntries"] == 0) //Anon Access
    {
        return true;
    }
    else if ((int)check.Rows[0]["EntriesForRole"] > 0)
    {
        return true;
    }
    else
    {
        return false;
    }
}

So, this little check before having the handler stream the file to the user makes sure that someone isn't getting access via URL to something that they shouldn't have access to. We've also added code to ensure that we mitigate any Sql injection vulnerabilities. Now, I've not gotten everything put together into a full-blown usable application. But ... I wanted to show some of the thought process around securing a relatively simple piece of functionality such as this. A bit of creativity in the process is also necessary ... you have to think outside the use case, go off the "happy path", to identify attack vectors and the threats represented by those attack vectors.
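To tie it together, here's a rough sketch of what the ASHX handler itself might look like. The handler name, the query string key, and the FileAccess.UserHasAccess and GetFileInfo helpers are hypothetical names for this example; the real handler would look up the name, content type and bytes from the FileList table and stream them out.

```csharp
using System;
using System.Web;

public class FileDownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        Guid fileId;
        try
        {
            //Indirect object reference: the client only ever sees a file id.
            fileId = new Guid(context.Request.QueryString["f"]);
        }
        catch (FormatException)
        {
            context.Response.StatusCode = 404;
            return;
        }

        //Restrict URL access: re-check authorization on every request,
        //not just when building the list page.
        if (!FileAccess.UserHasAccess(fileId))
        {
            //404 rather than 403 so we don't confirm the file even exists.
            context.Response.StatusCode = 404;
            return;
        }

        //Look up the file's metadata and bytes (hypothetical helper).
        FileRecord file = GetFileInfo(fileId);
        context.Response.ContentType = file.ContentType;
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=" + file.FileName);
        context.Response.BinaryWrite(file.Content);
    }

    public bool IsReusable { get { return true; } }
}
```

The key point is that the authorization check happens in the same request that serves the bytes, so a forwarded link buys UserB nothing.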

Content from OWASP User Group

Security | User Groups
I had a blast speaking at the Houston OWASP User Group last night. I did a review of the OWASP Top Ten and we had a lot of good discussion and conversation around secure application development and some of the implications. Though a relatively small group, it was pretty lively and really good to hang with some folks that care deeply and passionately about secure application development. This presentation was one that I had put together a while ago but, while reviewing it for this presentation, I really wasn’t very happy with it. So, of course, I made a number of changes to it and added a bit of stuff. It certainly seems to have gone over very well, so I’m pretty pleased with it now. Still, for those of you that were there, feel free to let me know what could be improved … I think I’ll take this presentation and turn it into a webcast. And, without further ado, here’s the content. Keep in mind that the demos are pretty simple … they really have only enough to show some mitigation strategies for particular vulnerabilities so they aren’t part of an overall application.

Hashing in .Net

.NET Stuff | Security
I've talked about DPAPI and symmetric encryption. Both of these are very good for certain things. But what about passwords? Encrypting them with DPAPI is not ideal ... DPAPI from ASP.NET would be machine-specific; it won't scale out and it's not easy to transfer between machines if there is a need for disaster recovery. Symmetric encryption can be a reasonable option, but there is a more secure (and faster) way to do this. Let me explain a bit further. Let's say that you forget your Windows domain password. Can you get that password back? No, you can only reset the password. Yes, I know there are password crackers, but they tend to be brute-force tools, or they use tables of known hashes and compare them to what's in the SAM. So, I'm sure you can guess where this is going ... hash algorithms (yeah, I guess the title was a giveaway). Hash algorithms have a simple function: they take input text, run it through an algorithm and produce output that cannot be reversed to recover the original. A small change in the input results in a large change in the output. The output itself will always have the same size, in bits, regardless of the input. So, for example, a 500 character string processed by a 256 bit hash algorithm will always return a 256 bit value. As would a 1 character string. This is another key difference between hashing and encryption functions. However, the same input will produce the same output ... so it is, as you can certainly guess, a very good way to store passwords. Since it's not reversible, it is very hard, if not impossible, for the password to be retrieved except through a brute-force attack. And there are ways to make a brute force attack even more difficult than it already is; we will touch on that. Hashes can also be used for checksums (you'll see MD5 hashes used for checksums on many Linux distribution downloads) ...
they can be considered a "digital fingerprint" that ensures the integrity of a downloaded file, zip archive and more; however, there are other algorithms that can also be used for these purposes that are not secure (for example, CRC, or cyclic redundancy check). All of the hash algorithms in .Net inherit from System.Security.Cryptography.HashAlgorithm. And, of course, you can find them in the System.Security.Cryptography namespace.

Hash Algorithms Supported in .Net

MD5: This is a widely used 128-bit hash algorithm, especially for validating downloaded files. It is an Internet standard, described in RFC 1321. However, there are known issues with MD5: collisions (that is, two different inputs producing the same hash) have been found on a laptop computer in about a minute. While there are ways to mitigate this, in general it is not recommended for new applications.

RipeMD160: This is a 160-bit hash algorithm designed to replace the earlier RipeMD, which was, in turn, based on the now-defunct MD4 (which was replaced by MD5). Like MD4, the original RipeMD was found to have some weaknesses. RipeMD160 improves on this, if only because the output size is larger.

SHA1: Designed by the National Security Agency as a Federal Information Processing Standard (FIPS). This produces a 160-bit hash value. It is in the process of being phased out due to vulnerabilities that have been reported in the algorithm.

SHA256, SHA384, SHA512: This family of algorithms is collectively known as SHA2. They have lengths of 256, 384 and 512 bits, respectively. Due to the known issues with SHA1, these algorithms are generally considered more secure.

OK, so we have that out of the way. So, let me run something else by you. Remember when I said that the same input produces the same output? That can be problematic, especially with passwords.
This is because if you know that one password is, for example, P@ssw0rd, and you see that another entry has the same hash value, then you know that the second entry is also P@ssw0rd. Symmetric cryptography has a similar issue and, with symmetric crypto, we use an initialization vector to resolve it. But hash algorithms don't have an IV. Instead, with a hash algorithm, we use a salt. The salt is the extra bit of gobbledygook that provides the randomization required to ensure that the above scenario doesn't occur. As with an initialization vector, it can be stored in the clear. But ... it's something that you have to add to the data to be hashed yourself; there are no properties for it as there are with an IV. And now, without further ado, some code. A note ... I'm passing the name of the algorithm into the function. This isn't necessary, but it does provide some flexibility. You can use the names of the algorithms (above) or you can hard-code the algorithm's class into the function. This first code sample shows hashing in its simplest form ... no salt, nothing special, just a straight hash. The return value is a Base64 encoded string ... I do like to use these for better (and easier) storage at the database level, though it is at the expense of some CPU cycles at the application logic level.

private string HashPasswordSimple(string password, string hashAlg)
{
    //convert the password to bytes with UTF8 Encoding.
    byte[] passwordBytes = System.Text.Encoding.UTF8.GetBytes(password);
    //HashAlgorithm is disposable, so we'll use a "using" block
    using (HashAlgorithm hashAlgorithm = HashAlgorithm.Create(hashAlg))
    {
        byte[] passwordHash = hashAlgorithm.ComputeHash(passwordBytes);
        //convert the computed hash to a string representation ...
        string hashString = System.Convert.ToBase64String(passwordHash);
        return hashString;
    }
}

As you can see, there's not that much to it. Pretty straightforward.
To verify a password, you recalculate the password's hash and then compare it to the stored value. Adding a salt takes this up a level and, of course, you'll need to store the salt somewhere as well. The good thing is that the salt isn't helpful by itself to a bad guy, so you can store it in the clear. Here is one method of using a salt (you can also append it to the end, etc., just as long as you can reproduce it).

private static string HashPasswordSalt(string password, byte[] salt, string hashAlg)
{
    //convert the password to bytes with UTF8 Encoding.
    byte[] passwordBytes = System.Text.Encoding.UTF8.GetBytes(password);
    //Create an array big enough for the salt plus the password bytes.
    byte[] hashData = new byte[passwordBytes.Length + salt.Length];
    //Use Buffer.BlockCopy to copy the salt and password
    //into the new array that will actually be hashed.
    Buffer.BlockCopy(salt, 0, hashData, 0, salt.Length);
    Buffer.BlockCopy(passwordBytes, 0, hashData, salt.Length, passwordBytes.Length);
    //From here, compute the hash.
    //HashAlgorithm is disposable, so we'll use a "using" block.
    using (HashAlgorithm hashAlgorithm = HashAlgorithm.Create(hashAlg))
    {
        byte[] passwordHash = hashAlgorithm.ComputeHash(hashData);
        //convert the computed hash to a string representation ...
        string hashString = System.Convert.ToBase64String(passwordHash);
        return hashString;
    }
}

The next question, of course, is how to create the salt. There are many ways to go about it, as long as it is unique to the individual hash (i.e. the same passwords should not have the same salt ... that would defeat the purpose). You can use the System.Security.Cryptography.RandomNumberGenerator class to create the salt; this class generates a cryptographically strong random sequence of values ... just using the System.Random class doesn't do that. You can also use a unique identifier associated with the user account to create the salt ... e.g. a user id Guid. You can do any number of things, as long as the result is unique in the hashing context. 
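To make that concrete, here's a quick sketch of my own (the class and method names are made up for illustration, not framework types): generate a random salt with RandomNumberGenerator, hash with the salt prepended (the same approach as HashPasswordSalt above), then verify a password attempt by recomputing and comparing.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

//A sketch of generating a salt and verifying a salted password hash.
//SaltedPasswordDemo, CreateSalt, HashWithSalt and VerifyPassword are my names.
public static class SaltedPasswordDemo
{
    public static byte[] CreateSalt(int size)
    {
        //RandomNumberGenerator gives cryptographically strong randomness;
        //System.Random does not.
        byte[] salt = new byte[size];
        using (RandomNumberGenerator rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }
        return salt;
    }

    public static string HashWithSalt(string password, byte[] salt, string hashAlg)
    {
        //Prepend the salt to the password bytes, then hash.
        byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
        byte[] hashData = new byte[salt.Length + passwordBytes.Length];
        Buffer.BlockCopy(salt, 0, hashData, 0, salt.Length);
        Buffer.BlockCopy(passwordBytes, 0, hashData, salt.Length, passwordBytes.Length);
        using (HashAlgorithm alg = HashAlgorithm.Create(hashAlg))
        {
            return Convert.ToBase64String(alg.ComputeHash(hashData));
        }
    }

    public static bool VerifyPassword(string attempt, byte[] storedSalt,
        string storedHash, string hashAlg)
    {
        //To verify, recompute with the stored salt and compare.
        return HashWithSalt(attempt, storedSalt, hashAlg) == storedHash;
    }
}
```

Verification, then, is just "recompute and compare" ... which is exactly why the salt has to be stored alongside the hash.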
In addition to traditional hash algorithms, .Net also has support for keyed hash algorithms. These take regular hashes a step further and are more commonly called a Hash Message Authentication Code (HMAC). These algorithms use a hash algorithm in combination with a secret key. This provides not just data integrity, but also authentication of the message ... proof that it came from someone holding the key. Think about it for a second ... if a hash algorithm is repeatable, a hacker could intercept the message, change it, recalculate the hash and you'd be none the wiser. With an HMAC, this is not possible, as the key is required to regenerate the hash. A keyed hash algorithm is essential to protect the integrity of a hash value that is transmitted to users (for example, in ASP.NET's ViewState). Keep in mind, however, that you still need to think about protecting the key. With all of that said, .Net supports 2 keyed hash algorithms and they both inherit from System.Security.Cryptography.KeyedHashAlgorithm. This, of course, inherits from HashAlgorithm.

Keyed Hash Algorithms Supported in .Net

HMACSHA1: Based on the SHA1 hashing algorithm (and, therefore, 160 bits), this adds a key of arbitrary length to the function.

MACTripleDES: As its name implies, this uses the TripleDES algorithm to produce a hash. The keys can be 8, 16 or 24 bytes and it generates a 64-bit hash.

The only difference between a straight hash algorithm and a keyed hash algorithm is the addition of the key. There isn't a need for a salt with a keyed algorithm; it is used for a different purpose (message authentication and validation) than a regular hash algorithm and, since the HMAC is authenticating a message sent in the clear, there really isn't any point to it. An example of using a keyed hash algorithm is in ASP.NET ... the <pages> element has an attribute of "enableViewStateMac". This has nothing to do with enabling ViewState on a Macintosh; it adds a MAC to the ViewState. There is also a page directive that will do this at the page level. 
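Before moving on, here's a minimal sketch of computing an HMAC in code with HMACSHA1 (the class name HmacDemo and the sample message and key are made up for illustration):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

//A sketch of computing a keyed hash (HMAC) over a message.
public static class HmacDemo
{
    public static string ComputeMac(string message, byte[] key)
    {
        //HMACSHA1 accepts a key of arbitrary length.
        using (HMACSHA1 hmac = new HMACSHA1(key))
        {
            byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
            return Convert.ToBase64String(mac);
        }
    }
}
```

Without the key, a bad guy can recompute a plain hash over a tampered message ... but not the HMAC.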
The key used can be specified in the <machineKey> element; if you have it auto-generated, you run the chance that the ViewState will fail validation when the AppDomain recycles or, if you are using a web farm, when a request goes to another web server. That's all for now. Have fun and happy coding!

Protecting Crypto Keys

.NET Stuff | Security
In my last post, I discussed how to work with symmetric encryption. One thing that I mentioned, but didn't go in to, was how to protect the keys for symmetric encryption. Here's the deal: you're using 256-bit Rijndael; you're doing everything right. But what do you do with the key? This is, after all, the key to your encrypted data (pun intended). If a bad guy gets the key, they'll be at your data in no time flat. This, by the way, is the excuse (and it is a poor excuse) that I've most commonly heard to defend a foray into craptology. But let's face it, it is a problem. What, oh what, is a security-conscious developer to do? Encrypt it with another symmetric algorithm? But then you have the same problem. How do we get ourselves out of this seemingly bottomless pit? Fear not, dear developer. No reason to worry yourself about all of this mess. Since Windows 2000, the Data Protection API (DPAPI) has shipped with Windows, providing a clean solution to this problem. DPAPI is based on TripleDES (see the previous entry) ... but here's the deal: the TripleDES key is based on the Windows profile, is automatically rotated, and the key itself exists in memory for only a brief period of time. But honestly, there's no need to worry about the details. It works, it works well, it's been reviewed by external security experts and it is generally considered to be an excellent implementation that solves this difficult problem. Now, before we get to the code on how to use DPAPI, let's talk a little more about the details. First, DPAPI can be associated with a single user account or with the machine account. The user account mode is, in general, more secure; that's because when the machine account is used, anyone with access to the machine can get the data decrypted. But that doesn't mean you should jump right into using the user account mode. When you use the user account mode, you will need to load the user's profile (and desktop) in order to encrypt and decrypt. 
Now, you can technically do that in a web application (by way of some Win32 API calls via PInvoke), but that is a Very Bad Idea™. So ... user account mode is not good for web applications. It is, however, very good for desktop applications - especially in scenarios where there may be multiple users for the system. It's also good to use with Windows services. In both of these situations, the user profile and desktop are loaded and ready for you. One little thorn that you might run into is this: you need to access the same encrypted data from multiple machines, but using the same user account. If you read the documentation, you'll see what appears to be a silver bullet for this problem ... roaming profiles. However, there be Dragons there. Big, nasty, fire-breathing dragons. Does it work? Yes ... in a perfect world. The problem is this: if the profile is unavailable, for whatever reason, Windows will quite happily create a temporary local profile. Which puts everything out of whack. Completely. (Don't ask how I know this ... I still have the scars.) For both modes, you can add an extra layer of security by adding entropy to the mix. It's just an extra bit of (again) gobbledygook added to the algorithm to ensure greater randomness. You'll see this in the code sample.

So, how to use it? In .Net 2.0 and higher, it's actually very easy. In .Net 1.x, you had to call the CryptoAPI directly via PInvoke. There was an implementation on MSDN that you could download and use, which was quite a relief. If you ever looked at that code, you'll be glad that you never had to write it yourself and your appreciation for crypto in .Net will increase tenfold. The .Net 2.0 implementation is in (of course) the System.Security.Cryptography namespace, but it is not in mscorlib. It's in System.Security.dll, so if you don't see it, make sure you add the reference and all will be well. You have 2 classes in there related to DPAPI: ProtectedData and ProtectedMemory. 
Their names tell you the difference between them. Here's a code sample of using DPAPI:

private string ProtectData(string clearText, string password)
{
    //convert our clear text into a byte array.
    byte[] clearTextBytes = System.Text.Encoding.UTF8.GetBytes(clearText);
    //We're going to add some entropy to this.
    //In this case, we're deriving the entropy bytes from the password.
    //This is a good way to use passwords in a more secure manner.
    System.Security.Cryptography.PasswordDeriveBytes pwd =
        new System.Security.Cryptography.PasswordDeriveBytes(password, null);
    byte[] entropy = pwd.GetBytes(16);
    //Do the encryption ... note that we pass the byte array, not the string.
    //Notice that Protect is a static method.
    byte[] cipherText = System.Security.Cryptography.ProtectedData.Protect(
        clearTextBytes, entropy,
        System.Security.Cryptography.DataProtectionScope.CurrentUser);
    //return the cipher text as a Base64 string.
    return Convert.ToBase64String(cipherText);
}

So ... not too hard, is it? Now, before you go off encrypting your keys for your web.config files, I must mention one more little thing: ASP.Net 2.0 will actually encrypt sections of the web.config file for you as well as handle the decryption invisibly - you just continue to use the configuration APIs like you always have. One way to do this is to use the aspnet_regiis command-line tool. You can read the docs on that on MSDN. More interesting to me, however, is the ability to do this in code. And, while the aspnet_regiis utility only works on web applications, doing this in code will work with every application. And so, without further ado, here's the code:

static public void EncryptConnectionStrings()
{
    // Get the current configuration file.
    System.Configuration.Configuration config =
        ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    // Get the section ... note that it is a ConnectionStringsSection.
    ConnectionStringsSection section =
        (ConnectionStringsSection)config.GetSection("connectionStrings");
    // Protect (encrypt) the section with the DPAPI provider.
    section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    // Save the encrypted section.
    section.SectionInformation.ForceSave = true;
    // And then save the config file.
    config.Save(ConfigurationSaveMode.Full);
}
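For completeness, the reverse trip of ProtectData above might be sketched like this (my own sketch ... the method name is made up). The same entropy must be re-derived from the same password, or Unprotect will throw.

```csharp
private string UnprotectData(string cipherText, string password)
{
    //Recreate the same entropy from the same password.
    System.Security.Cryptography.PasswordDeriveBytes pwd =
        new System.Security.Cryptography.PasswordDeriveBytes(password, null);
    byte[] entropy = pwd.GetBytes(16);
    //Undo the Base64 encoding, then unprotect with the same scope.
    byte[] cipherBytes = Convert.FromBase64String(cipherText);
    byte[] clearBytes = System.Security.Cryptography.ProtectedData.Unprotect(
        cipherBytes, entropy,
        System.Security.Cryptography.DataProtectionScope.CurrentUser);
    return System.Text.Encoding.UTF8.GetString(clearBytes);
}
```

Note that DPAPI is a Windows feature; this only runs under the same user account (CurrentUser scope) that protected the data in the first place.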

Notes on Symmetric Cryptography

.NET Stuff | Security
Howdy y'all. Me again. I've gotten a lot of questions about doing crypto in .Net ... for some reason, it's been something that interests me quite a bit. Now, there are a bunch of resources out there on this, but they're (apparently) not always easy to find. So, I'm going to put some tips and thoughts here. First, let me say this: .Net has awesome support for crypto. This support is in the System.Security.Cryptography namespace (or a sub-namespace under that) with most of the classes implemented in mscorlib. I'm going to focus on symmetric encryption here (I'll deal with the others later). Symmetric encryption is reversible (you can get the clear text from the crypto text) and is based on a single key. There are several symmetric algorithms included with .Net, and all of their implementation classes derive from the System.Security.Cryptography.SymmetricAlgorithm abstract class:

DES (Data Encryption Standard) (FX 1.0+): This was the Federal Information Processing Standard (FIPS) starting in 1976. It has a 56-bit key, so with today's modern computers, it is subject to a brute-force attack in a trivial amount of time. It's not recommended for general usage anymore, but it has been so widely used for so long that it would be unwise not to include it.

TripleDES (FX 1.0+): Also commonly referred to as 3DES. Basically, as its name implies, it's DES three times over. There are (usually) 3 DES keys and the cipher is run through the three keys on successive passes. There are actually several variations on the theme out and about, some using 2 keys, some using 1 key, but, in general, the most common method is three keys.

Rijndael (FX 1.0+): This is the algorithm that has become the Advanced Encryption Standard (AES) and is the replacement for 3DES. It supports 128, 192 and 256 bit keys. 
To put this in perspective, if a machine could recover a DES key in a second (using brute force), it would take approximately 149 trillion years to crack a 128-bit AES key (per NIST's oft-quoted estimate). Rijndael was the winner of an exhaustive selection process run by the National Institute of Standards and Technology (NIST), with input from the US National Security Agency (or No Such Agency, depending on your viewpoint), to determine the next FIPS algorithm. It was selected for its high level of security as well as its efficiency on modern processors (DES and 3DES were notoriously inefficient). The other finalists were considered secure enough for non-classified information, but only Rijndael was considered secure enough for classified information. For the details, see the Rijndael specification. Now, if you understand that stuff, let me know. Perhaps you could explain it to me in English.

AES (Fx 3.5): This is a FIPS-certified implementation of Rijndael. And yes, this is a big deal, especially for organizations that deal with the US government and, particularly, the DoD. For classified information, the key must be 192 or 256 bits.

Now, because they all derive from the same base class, using them is pretty much the same (with the exception of key sizes). Here's a code sample (with comments):

public static byte[] EncryptText(string clearText)
{
    //Create our algorithm.
    using (SymmetricAlgorithm alg = Rijndael.Create())
    {
        //Can also use:
        //SymmetricAlgorithm alg = SymmetricAlgorithm.Create("Rijndael");
        //For clarity, we'll generate the key.
        //In the real world, you'll likely get this from ... somewhere ...
        alg.GenerateKey();
        //An initialization vector is important for true security of the algorithm.
        alg.GenerateIV();
        //Create our output stream for the cipher text.
        //We're using a memory stream here, but you can use
        //any writable stream (i.e. FileStream).
        System.IO.MemoryStream outputStream = new System.IO.MemoryStream();
        //Create the crypto stream that the algorithm will use.
        CryptoStream crypStream = new CryptoStream(outputStream,
            alg.CreateEncryptor(), CryptoStreamMode.Write);
        //This writer will write to the CryptoStream.
        System.IO.StreamWriter inputWriter = new System.IO.StreamWriter(crypStream);
        //Write to the stream writer ... this writes to the underlying CryptoStream.
        inputWriter.Write(clearText);
        inputWriter.Flush();
        //Flush the final block of the cipher ... without this,
        //the last block never makes it into the output stream.
        crypStream.FlushFinalBlock();
        //The encrypted data is now ready to read into a byte array.
        //If we were using, say, a FileStream for the output, we wouldn't need to do this.
        byte[] cipherBytes = outputStream.ToArray();
        //Make sure we close the streams (closing the writer closes the rest).
        inputWriter.Close();
        //... and return ...
        return cipherBytes;
    }
}

So ... the comments do tell a lot of the story, but not all. What is an IV? No, it's not a needle ... it's an initialization vector. This is an extra bit of random gobbledygook that is mixed into the beginning of the clear text before it is run through the cipher. This is actually very important to do. You see, these algorithms are block ciphers, meaning that they encrypt a block at a time. By default, they run in CipherBlockChaining (CBC) mode, where some of the previous block of cipher text is fed into the next block. This helps increase the randomness of the cipher text. However, if two clear texts start with the same pattern (not at all uncommon), then the beginnings of the cipher texts will also be the same. Not good, as it helps a bad guy reduce the key space. So ... the IV prevents that from happening. You can store the IV separately from the cipher text (in the clear ... 
bad guys can't get anything useful from it) or you can prepend the returned cipher text with it (so the return is [IV][cipher text]). I prefer the second ... it's a touch of security by obscurity (this isn't bad as long as it's not the only thing that you rely on ... it can be a part of a complete defense-in-depth strategy). Decrypting is very similar ... the same process (and almost the same code) in reverse. Here's a sample with fewer comments (many of the comments above also apply here). Note that the key and IV are passed in ... they have to come from somewhere.

public static string DecryptText(byte[] cipherText, byte[] key, byte[] iv)
{
    using (SymmetricAlgorithm alg = Rijndael.Create())
    {
        //The key and IV will come from somewhere ... more on that in a later post.
        alg.Key = key;
        alg.IV = iv;
        System.IO.MemoryStream outputStream = new System.IO.MemoryStream();
        //Create the crypto stream.
        //This is the biggest difference between encryption and decryption:
        //we use CreateDecryptor and write the cipher bytes through it.
        CryptoStream crypStream = new CryptoStream(outputStream,
            alg.CreateDecryptor(), CryptoStreamMode.Write);
        //Also a slightly different write because we have bytes to write.
        crypStream.Write(cipherText, 0, cipherText.Length);
        //Flush the final block so all of the clear text is in the output stream.
        crypStream.FlushFinalBlock();
        //The decrypted data is now ready to read.
        //Rewind the output stream before reading the clear text back out.
        outputStream.Position = 0;
        string clearText;
        using (System.IO.StreamReader outputReader = new System.IO.StreamReader(outputStream))
        {
            clearText = outputReader.ReadToEnd();
        }
        //Make sure we close the other stream.
        crypStream.Close();
        //... and return ...
        return clearText;
    }
}

Some final comments: I like to have all of the disposable objects in their own using blocks. I didn't do that here to minimize the nesting of the using blocks for the sake of clarity. That said, I'm a big fan of using blocks. That's my story and I'm sticking to it. I didn't talk about key storage. That's the stickiest part of using symmetric algorithms. I'll deal with that in a later post. 
Here's a clue: DPAPI. If you notice, the encrypt and the decrypt functions are almost identical. Yes, it is possible to have both operations in one function, with a bool indicating encryption/decryption. I, personally, like to do this. I did not do that here for the sake of clarity and a clear separation between the two processes. I'm sure you can look at the samples above and make that happen. You can store the byte arrays as text/strings. To do that, use this snippet: string cipherString = System.Convert.ToBase64String(data); This really is pretty easy. It's very straightforward. If you think it's hard, try reading the documentation for the Win32 CryptoAPI. It's called the CryptoAPI because it's cryptic. It will make your brain hurt. Badly. I recommend a heavy dose of Advil after reading it. You'll need it. Use one of these algorithms. I do prefer Rijndael/AES, but any of these (even DES) is better than creating your own "crypto algorithm". In the words of Michael Howard, that's craptography. Just say no. Don't do it. Unless you are a PhD in Mathematics specializing in crypto algorithms, you'll get it wrong. Read the Rijndael article referenced above. If you can't understand it ... don't write your own algorithm. It's just that simple. Even if you do understand it, it's still not a good idea to write your own algorithm. Just use Rijndael. It's been well vetted, and just because the algorithm is known doesn't mean that it's less secure. One characteristic of a good algorithm is that its details can be public without compromising the security of the algorithm.
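As a closing sketch, here's the "one function, bool flag" variant mentioned above. This is my own condensation of the two samples (CryptoHelper and Transform are names I made up), assuming the caller supplies the key and IV:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

public static class CryptoHelper
{
    //Transform both ways with one method; encrypt == true for encryption.
    public static byte[] Transform(byte[] input, byte[] key, byte[] iv, bool encrypt)
    {
        using (SymmetricAlgorithm alg = Rijndael.Create())
        {
            alg.Key = key;
            alg.IV = iv;
            //The only real difference is which transform we create.
            ICryptoTransform xform = encrypt ? alg.CreateEncryptor() : alg.CreateDecryptor();
            using (MemoryStream outputStream = new MemoryStream())
            using (CryptoStream crypStream =
                new CryptoStream(outputStream, xform, CryptoStreamMode.Write))
            {
                crypStream.Write(input, 0, input.Length);
                //Flush the final block before grabbing the result.
                crypStream.FlushFinalBlock();
                return outputStream.ToArray();
            }
        }
    }
}
```

A round trip is then just two calls with the flag flipped ... same key, same IV.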