Ruminations of J.net: idle rants and ramblings of a code monkey

Thoughts on Secure File Downloads

.NET Stuff | Security | Web (and ASP.NET) Stuff
Well, that's kinda over-simplifying it a bit. It's more about file downloads and protecting files from folks that shouldn't see them, and it comes from some of the discussion last night at the OWASP User Group. So … I was thinking that I'd put up a master file-download page for my file repository. The idea is that there would be an admin section where I could upload the files, a process that would also put them into the database with the relevant information (name, content type, etc.). This touches on one of the vulnerabilities discussed last night … insecure direct object reference. Rather than giving out filenames, etc., the page would hand out a file identifier (OWASP #4). That way, there is no direct object reference. That file id would be handed off to a handler (ASHX) that would actually send the file to the client (just doing a redirect from the handler doesn't solve the issue at all).

But I got to thinking … I might also want to limit access to some files to specific users/logins. So now we are getting into restricting URL access (OWASP #10). If I use the same handler as mentioned above, I can't use ASP.NET to restrict access, leaving me vulnerable. Certainly, using GUIDs makes the identifiers harder to guess, but it won't prevent UserA, who has access to FileA, from sending a link to UserB, who does not have access to FileA. Once UserB logged in, there would be nothing to prevent him/her from getting to the file … there is no additional protection above and beyond the indirect object reference, and I'm not adequately protecting URL access.

This highlights one of the discussion points last night: vulnerabilities often travel in packs. We may look at things like the OWASP Top Ten and identify individual vulnerabilities, but that looks at the issues in isolation. The reality is that you will often have a threat with multiple potential attack vectors from different vulnerabilities. Or you may have a vulnerability that is used to exploit another vulnerability (for example, a Cross-Site Scripting vulnerability that is used to exploit a Cross-Site Request Forgery vulnerability, and so on).

So … what do I do here? Well, I could just not worry about it … the damage potential and level of risk are pretty low, but that really just evades the question. It's much more fun to attack this head on and come up with something that mitigates the threat. One method is to have different download pages for each role and then protect access to those pages in the web.config file. That would work, but it's not an ideal solution. When coming up with mitigation strategies, we should keep usability in mind and balance it against the mitigation. This may not be ideal to the purist, but the reality is that we do need to take things like usability and end-user experience into account. Of course, there's also the additional maintenance that the "simple" method would entail – something I'm not really interested in. The ideal scenario is one download page that displays the files available to the user based on their identity, whether that is anonymous or authenticated.

So … let's go through how to implement this in a way that mitigates (note … not eliminates, but mitigates) the threats. First, the database. It boils down to two tables: the primary table (FileList), which holds the file id along with the relevant file information (name, content type, etc.), and the FileRoleXREF table, which holds the file ids and the roles that are allowed to access each file.
A file that all are allowed to access will not have any records in the XREF table. To display the list of files for a logged in user, we need to build the Sql statement dynamically, with a where clause based on the roles for the current user. This, by the way, is one of the "excuses" that I've heard for using string concatenation to build Sql statements. It's not a valid one; it just takes a little more work. And, because the role values go into the statement as parameters rather than being concatenated in, we've also mitigated Sql injection, even though the risk here is low since the list of roles comes from a trusted source. Still, it's easy and it's better to be safe. So … here's the code.

public static DataTable GetFilesForCurrentUser()
{
    //We'll need this later.
    List<SqlParameter> paramList = new List<SqlParameter>();
    //Add the base Sql.
    //This includes the "Where" for files for anon users
    StringBuilder sql = new StringBuilder(
        "SELECT * FROM FileList " +
        "WHERE (FileId NOT IN " +
        "(SELECT FileId FROM FileRoleXREF))");
    //Check the user ...
    IPrincipal crntUser = HttpContext.Current.User;
    if (crntUser.Identity.IsAuthenticated)
    {
        string[] paramNames = GetRoleParamsForUser(paramList, crntUser);
        //Now add to the Sql
        sql.Append(" OR (FileId IN (SELECT FileId FROM " +
            "FileRoleXREF WHERE RoleName IN (");
        sql.Append(String.Join(",", paramNames));
        sql.Append(")))");
    }
    return GetDataTable(sql.ToString(), paramList);
}

private static string[] GetRoleParamsForUser(List<SqlParameter> paramList, IPrincipal crntUser)
{
    //Now, add the select for the roles.
    string[] roleList = Roles.GetRolesForUser(crntUser.Identity.Name);
    //Create the parameters for the roles
    string[] paramNames = new string[roleList.Length];
    for (int i = 0; i < roleList.Length; i++)
    {
        string role = roleList[i];
        //Each role is a parameter ...
        string paramName = "@role" + i.ToString();
        paramList.Add(new SqlParameter(paramName, role));
        paramNames[i] = paramName;
    }
    return paramNames;
}

From there, creating the command and filling the DataTable is simple enough. I'll leave that as an exercise for the reader.
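That said, if you want a starting point for the exercise, here's a minimal sketch of what a GetDataTable helper might look like. The connection string name ("FileRepository") is my assumption, not something from the original design.

//A minimal sketch of the GetDataTable helper left as an exercise above.
//The connection string name is an assumption; adjust to your configuration.
private static DataTable GetDataTable(string sql, List<SqlParameter> paramList)
{
    string connString = ConfigurationManager
        .ConnectionStrings["FileRepository"].ConnectionString;
    using (SqlConnection conn = new SqlConnection(connString))
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        //Add the parameters that were built up by the caller.
        cmd.Parameters.AddRange(paramList.ToArray());
        DataTable table = new DataTable();
        //SqlDataAdapter opens and closes the connection for us.
        using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
        {
            adapter.Fill(table);
        }
        return table;
    }
}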
This still, however, doesn't protect us from the failure-to-restrict-URL-access issue mentioned above. True, UserA only sees the files that he has access to, and UserB only sees the files that she has access to. But that's still not stopping UserA from sending UserB a link to a file that he can access but she can't. In order to prevent this, we have to add some additional checking to the ASHX handler to validate access. It'd be easy enough to do with a couple of calls to Sql, but here's how I do it with a single call …

public static bool UserHasAccess(Guid FileId)
{
    //We'll need this later.
    List<SqlParameter> paramList = new List<SqlParameter>();
    //Add the file id parameter
    paramList.Add(new SqlParameter("@fileId", FileId));
    //Add the base Sql.
    //This includes the "Where" for files for anon users
    StringBuilder sql = new StringBuilder(
        "SELECT A.RoleEntries, B.EntriesForRole " +
        "FROM (SELECT COUNT(*) AS RoleEntries " +
        "FROM FileRoleXREF X1 " +
        "WHERE (FileId = @fileId)) AS A CROSS JOIN ");
    //Check the user ...
    IPrincipal crntUser = HttpContext.Current.User;
    if (crntUser.Identity.IsAuthenticated)
    {
        sql.Append("(SELECT Count(*) AS EntriesForRole " +
            "FROM FileRoleXREF AS X2 " +
            "WHERE (FileId = @fileId) AND " +
            "RoleName IN (");
        string[] roleList = GetRoleParamsForUser(paramList, crntUser);
        sql.Append(String.Join(",", roleList));
        sql.Append(")) B");
    }
    else
    {
        sql.Append("(SELECT 0 AS EntriesForRole) B");
    }
    DataTable check = GetDataTable(sql.ToString(), paramList);
    if ((int)check.Rows[0]["RoleEntries"] == 0) //Anon access
    {
        return true;
    }
    else if ((int)check.Rows[0]["EntriesForRole"] > 0)
    {
        return true;
    }
    else
    {
        return false;
    }
}

So, this little check, run before the handler streams the file to the user, makes sure that someone isn't getting access via URL to something that they shouldn't have access to. And, since everything goes through parameters, we've mitigated Sql injection here as well. Now, I haven't gotten everything put together into a full-blown usable application, but I wanted to show some of the thought process around securing a relatively simple piece of functionality such as this. A bit of creativity in the process is also necessary … you have to think outside the use case and go off the "happy path" to identify attack vectors and the threats they represent.
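Even though I haven't built the full application, here's a rough sketch of what the download handler itself might look like, just to make the flow concrete. The FileList column names (FileName, ContentType, FileData), the "id" query-string parameter, and the assumption that UserHasAccess and GetDataTable are reachable from the handler are all mine for illustration; they aren't from the original design.

//A rough sketch of the ASHX handler. Column names (FileName, ContentType,
//FileData) and the "id" query-string parameter are assumptions for illustration.
public class FileDownloadHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        Guid fileId;
        try
        {
            //The indirect object reference: a file id, not a file name.
            fileId = new Guid(context.Request.QueryString["id"]);
        }
        catch
        {
            //Don't leak details about what went wrong.
            context.Response.StatusCode = 404;
            return;
        }

        //The access check described above.
        if (!UserHasAccess(fileId))
        {
            context.Response.StatusCode = 404;
            return;
        }

        //Pull the file record using the same GetDataTable helper.
        List<SqlParameter> paramList = new List<SqlParameter>();
        paramList.Add(new SqlParameter("@fileId", fileId));
        DataTable file = GetDataTable(
            "SELECT FileName, ContentType, FileData FROM FileList WHERE FileId = @fileId",
            paramList);
        if (file.Rows.Count == 0)
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = (string)file.Rows[0]["ContentType"];
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=" + (string)file.Rows[0]["FileName"]);
        context.Response.BinaryWrite((byte[])file.Rows[0]["FileData"]);
    }
}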

Hashing in .Net

.NET Stuff | Security
I've talked about DPAPI and symmetric encryption. Both of these are very good for certain things. But what about passwords? Encrypting them with DPAPI is not ideal ... DPAPI from ASP.NET would be machine-specific, so it won't scale out and it's not easy to transfer between machines if there is a need for disaster recovery. Symmetric encryption can be a reasonable option, but there is a more secure (and faster) way to do this. Let me explain a bit further. Let's say that you forget your Windows domain password. Can you get that password back? No, you can only reset the password. Yes, I know there are password crackers, but they tend to be brute-force tools, or they use tables of known hashes and compare them to what's in the SAM. So, I'm sure you can guess what it is ... hash algorithms (yeah, I guess the title was a giveaway).

Hash algorithms have a simple function: they take input text, run it through an algorithm and produce output that cannot be reversed back to the original. A small change in the input results in a large change in the output. The output itself will always have the same size, in bits, regardless of the input. So, for example, a 500 character string processed by a 256-bit hash algorithm will always return a 256-bit value, as would a 1 character string. This is another key difference between hashing and encryption functions. However, the same input will produce the same output ... so it is, as you can certainly guess, a very good way to store passwords. Since it's not reversible, it is very hard, if not impossible, for the original value to be retrieved except through a brute-force attack. And there are ways to make a brute-force attack even more difficult than it already is; we will touch on that. Hashes can also be used for checksums (you'll see MD5 hashes used for checksums on many Linux distribution downloads) ... they can be considered a "digital fingerprint" that ensures the integrity of a downloaded file, zip archive and more; however, there are other algorithms that can also be used for these purposes that are not secure (for example: CRC, or cyclic redundancy check).

All of the hash algorithms in .Net inherit from System.Security.Cryptography.HashAlgorithm. And, of course, you can find them in the System.Security.Cryptography namespace.

Hash Algorithms Supported in .Net

MD5: This is a widely used 128-bit hash algorithm, especially for validating downloaded files. It is an Internet standard, described in RFC 1321. However, there are known issues with MD5: collisions (that is, two different inputs producing the same hash) have been shown to be findable on a laptop computer in about a minute. While there are ways to mitigate this, in general it is not recommended for new applications.

RipeMD160: This is a 160-bit hash algorithm designed to replace the earlier RipeMD, which was, in turn, based on the now-defunct MD4 (which was replaced by MD5). Like MD4, the original RipeMD was found to have some weaknesses. RipeMD160 improves on this, if only because the size is larger.

SHA1: Designed by the National Security Agency for use as a Federal Information Processing Standard (FIPS). This produces a 160-bit hash value. It is in the process of being phased out due to vulnerabilities that have been reported in the algorithm.

SHA256, SHA384, SHA512: This family of algorithms is collectively known as SHA2. They have lengths of 256, 384 and 512 bits, respectively. Due to the known issues with SHA1, these algorithms are generally considered more secure.
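As a quick aside on the checksum use mentioned above, here's a minimal sketch (my own illustration, not from the original post) of hashing a file with the ComputeHash(Stream) overload; the file path is obviously just a placeholder, and the same pattern works with any of the algorithms listed.

//A minimal sketch: hashing a file to verify its integrity.
//The path is a placeholder; swap SHA256 for MD5, SHA1, etc. as needed.
public static string ComputeFileChecksum(string path)
{
    using (System.IO.FileStream stream = System.IO.File.OpenRead(path))
    using (HashAlgorithm hashAlgorithm = SHA256.Create())
    {
        byte[] hash = hashAlgorithm.ComputeHash(stream);
        //Hex strings are the usual way published checksums are displayed.
        return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
    }
}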
OK, so we have that out of the way. So, let me run something else by you. Remember when I said that the same input produces the same output? That can be problematic, especially with passwords. If you know that one password is, for example, P@ssw0rd, and you see that another entry has the same hash value, then you know that the second entry is also P@ssw0rd. Symmetric cryptography has a similar issue and, with symmetric crypto, we use an initialization vector to resolve it. But hash algorithms don't have an IV. Instead, with a hash algorithm, we use a salt. The salt is the extra bit of gobbledygook that provides the randomization required to ensure that the above scenario doesn't occur. As with an initialization vector, it can be stored in the clear. But ... it's something that you have to add to the data to be hashed yourself; there are no properties for it as there are with an IV.

And now, without further ado, for some code. A note ... I'm passing the name of the algorithm into the function. This isn't necessary, but it does provide some flexibility. You can use the names of the algorithms (above) or you can hard-code the algorithm's class into the function. This first code sample shows hashing in its simplest form ... no salt, nothing special, just a straight hash. The return value is a Base64 encoded string ... I do like to use these for better (and easier) storage at the database level, though it is at the expense of some CPU cycles at the application logic level.

private string HashPasswordSimple(string password, string hashAlg)
{
    //convert the password to bytes with UTF8 encoding.
    byte[] passwordBytes = System.Text.Encoding.UTF8.GetBytes(password);
    //HashAlgorithm is disposable, so we'll use a "using" block
    using (HashAlgorithm hashAlgorithm = HashAlgorithm.Create(hashAlg))
    {
        byte[] passwordHash = hashAlgorithm.ComputeHash(passwordBytes);
        //convert the computed hash to a string representation ...
        string hashString = System.Convert.ToBase64String(passwordHash);
        return hashString;
    }
}

As you can see, there's not that much to it. Pretty straightforward. To verify a password, you recalculate its hash and compare it to the stored value. Adding a salt takes this up a level and, of course, you'll need to store the salt somewhere as well. The good thing is that the salt isn't helpful by itself to a bad guy, so you can store it in the clear. Here is one method of using a salt (you can also add it to the end, etc., just as long as you can reproduce it).

private static string HashPasswordSalt(string password, byte[] salt, string hashAlg)
{
    //convert the password to bytes with UTF8 encoding.
    byte[] passwordBytes = System.Text.Encoding.UTF8.GetBytes(password);
    //Combine the salt and the password bytes.
    byte[] hashData = new byte[passwordBytes.Length + salt.Length];
    //Use Buffer.BlockCopy to copy the salt and password
    //into a new array that will actually be hashed.
    Buffer.BlockCopy(salt, 0, hashData, 0, salt.Length);
    Buffer.BlockCopy(passwordBytes, 0, hashData, salt.Length, passwordBytes.Length);
    //From here, compute the hash.
    //HashAlgorithm is disposable, so we'll use a "using" block
    using (HashAlgorithm hashAlgorithm = HashAlgorithm.Create(hashAlg))
    {
        byte[] passwordHash = hashAlgorithm.ComputeHash(hashData);
        //convert the computed hash to a string representation ...
        string hashString = System.Convert.ToBase64String(passwordHash);
        return hashString;
    }
}

The next question, of course, is how to create the salt. There are many ways to go about it, as long as it is unique to the individual hash (i.e. the same passwords should not end up with the same salt ... that would defeat the purpose). You can use the System.Security.Cryptography.RandomNumberGenerator class to create the salt; it generates a cryptographically strong random sequence of values, which just using the System.Random class doesn't do. You can also use a unique identifier associated with the user account (for example) to create the salt ... i.e. a user id Guid. You can do any number of things, as long as the salt is unique in the hashing context.
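Here's a minimal sketch of the RandomNumberGenerator approach (my own illustration; the 16-byte size is just an assumption, so pick what suits you):

//A minimal sketch of creating a salt with a cryptographically strong RNG.
//The size (16 bytes in the usage below) is an assumption; adjust as needed.
private static byte[] CreateSalt(int size)
{
    byte[] salt = new byte[size];
    //RNGCryptoServiceProvider is the concrete RandomNumberGenerator implementation.
    System.Security.Cryptography.RNGCryptoServiceProvider rng =
        new System.Security.Cryptography.RNGCryptoServiceProvider();
    //Fill the array with cryptographically strong random bytes.
    rng.GetBytes(salt);
    return salt;
}

//Usage: hash with a fresh salt, then store both the hash and the salt.
//byte[] salt = CreateSalt(16);
//string hash = HashPasswordSalt(password, salt, "SHA256");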
In addition to traditional hash algorithms, .Net also has support for keyed hash algorithms. These take regular hashes a step further and are more commonly called a Hash Message Authentication Code (HMAC). These algorithms use a hash algorithm in combination with a secret key. This provides not just data integrity, but also authentication of the message. Think about it for a second ... if a hash algorithm is repeatable, a hacker could intercept the message, change it, recalculate the hash and you'd be none the wiser. With an HMAC, this is not possible, as the key is required to regenerate the hash. A keyed hash algorithm is essential to protect the integrity of a hash value that is transmitted to users (for example, in ASP.NET's ViewState). Keep in mind, however, that you still need to think about protecting the key.

With all of that said, .Net supports several keyed hash algorithms, and they all inherit from System.Security.Cryptography.KeyedHashAlgorithm, which, of course, inherits from HashAlgorithm. The two listed below have been there since .Net 1.x; .Net 2.0 added more HMAC variants, such as HMACSHA256 and HMACSHA512.

Keyed Hash Algorithms Supported in .Net

HMACSHA1: Based on the SHA1 hashing algorithm (and, therefore, 160 bits), this adds a key of arbitrary length to the function.

MACTripleDES: As its name implies, this uses the TripleDES algorithm to produce a hash. The keys can be 8, 16 or 24 bytes, and it generates a 64-bit hash.

The only difference between a straight hash algorithm and a keyed hash algorithm is the addition of the key. There isn't a need for a salt with a keyed algorithm; it is used for a different purpose (message authentication and validation) than a regular hash algorithm and, since the HMAC is authenticating a message sent in the clear, there really isn't any point to it. An example of using a keyed hash algorithm is in ASP.NET ... the <pages> element has an attribute called "enableViewStateMac". This has nothing to do with enabling ViewState on Macintosh; it adds a MAC to the ViewState. There is also a page directive that will do this at the page level. The key used can be specified in <machineKey>; if you have it auto-generated, you run the chance that the ViewState will fail validation when the AppDomain recycles or, if you are using a web farm, when a request goes to another web server.

That's all for now. Have fun and happy coding!

More on Membership (from Zain's Presentation yesterday)

.NET Stuff | Web (and ASP.NET) Stuff
When I posted the notes yesterday from Zain's presentation, I posted something about adding users in code ... but I only mentioned the CreateUser() method. Well, I just got an email from the person that requested this little tip ... with a little curve thrown in. They also needed to add some profile information. So, I sat down and wrote a little sample of how to do this (liberally sprinkled with comments). For those that attended my performance session in Second Life, there's another little perf tip in here that I forgot to put into my presentation (it's going in now though). Here it is ...

//Some assumptions ...
//You have all the user information in a data reader.
//It actually doesn't matter where it comes from,
//but using a reader makes the sample easier. :)
//The using statements that are important here:
//using System.Web.Security;
//using System.Web.Profile;
public static void AddUsers(SqlDataReader rdr)
{
    //Set the script timeout to 10 minutes (the value is in seconds) ...
    //Use a reasonable value.
    HttpContext.Current.Server.ScriptTimeout = 600;
    //And we will get the indexes for the fields that we want.
    //This is a good deal more efficient than rdr["FieldName"],
    //but ... you *must* do it before entering the loop.
    //It's also type safe as well. :-)
    int userId = rdr.GetOrdinal("UserID");
    int password = rdr.GetOrdinal("Password");
    int email = rdr.GetOrdinal("Email");
    int favColor = rdr.GetOrdinal("FavoriteColor");
    int height = rdr.GetOrdinal("Height");
    int street = rdr.GetOrdinal("Street");
    while (rdr.Read())
    {
        //Add the users ...
        //I'm using one of the overloads here ... there are several.
        //This returns the membership user that was created.
        MembershipUser usr = Membership.CreateUser(rdr.GetString(userId),
            rdr.GetString(password), rdr.GetString(email));
        //Now you will need to create the profile.
        ProfileBase prof = System.Web.Profile.ProfileBase.Create(usr.UserName);
        //And I'm making up some properties here.
        //These will need to be configured.
        prof.PropertyValues["FavoriteColor"].PropertyValue = rdr.GetString(favColor);
        prof.PropertyValues["Height"].PropertyValue = rdr.GetString(height);
        //And, if you're using groups
        //(a good idea if you have a lot of properties)
        ProfileGroupBase addrGroup = prof.GetProfileGroup("Address");
        addrGroup.SetPropertyValue("Street", rdr.GetString(street));
        //Make sure that the profile is saved.
        prof.Save();
    }
}

And ... that's all folks!

Protecting Crypto Keys

.NET Stuff | Security
In my last post, I discussed how to work with symmetric encryption. One thing that I mentioned, but didn't go in to, was how to protect the keys for symmetric encryption. Here's the deal: you're using 256-bit Rijndael; you're doing everything right. But what do you do with the key? This is, after all, the key to your encrypted data (pun intended). If a bad guy gets the key, they'll be at your data in no time flat. This, by the way, is the excuse (and it is a poor excuse) that I've most commonly heard to defend a foray into craptology. But let's face it, it is a problem. What, oh what, is a security conscious developer to do? Encrypt the key with another symmetric algorithm? Then you have the same problem all over again. How do we get ourselves out of this seemingly bottomless pit?

Fear not, dear developer. There's no reason to worry about all of this mess. Since Windows 2000, the Data Protection API (DPAPI) has shipped with Windows, providing a clean solution to this problem. DPAPI is based on TripleDES (see the previous entry) ... but here's the deal: the TripleDES key is based on the Windows profile, is automatically rotated, and the key itself exists in memory for only a brief period of time. Honestly, though, there's no need to worry about the details. It works, it works well, and it has been reviewed by external security experts and is generally considered to be an excellent implementation for solving this difficult problem.

Now, before we get to the code on how to use DPAPI, let's talk a little more about the details. First, DPAPI can be associated with a single user account or with the machine account. The user account mode is, in general, more secure; that's because when the machine account is used, anyone with access to the machine can get the data decrypted. But that doesn't mean you should jump right into using the user account mode. When you use the user account mode, you need to load the user's profile (and desktop) in order to encrypt and decrypt. Now, you can technically do that in a web application (by way of some Win32 API calls via PInvoke), but that is a Very Bad Idea™. So ... user account mode is not good for web applications. It is, however, very good for desktop applications - especially in scenarios where there may be multiple users on the system. It's also good to use with Windows services. In both of these situations, the user profile and desktop are loaded and ready for you.

One little thorn that you might run in to is this: you need to access the same encrypted data from multiple machines, but using the same user account. If you read the documentation, you'll see what appears to be a silver bullet for this problem ... roaming profiles. However, there be Dragons there. Big, nasty, fire-breathing dragons. Does it work? Yes ... in a perfect world. The problem is this: if the profile is unavailable, for whatever reason, Windows will quite happily create a temporary local profile. Which puts everything out of whack. Completely. (Don't ask how I know this ... I still have the scars.)

For both modes, you can add an extra layer of security by adding entropy to the mix. It's just an extra bit of (again) gobbledygook added to the algorithm to ensure greater randomness. You'll see this in the code sample. So, how do you use it? In .Net 2.0 and higher, it's actually very easy. In .Net 1.x, you had to call the CryptoAPI directly via PInvoke. There was an implementation on MSDN that you could download and use, which was quite a relief.
If you looked at that code, you'll be glad that you never had to write it yourself, and your appreciation for crypto in .Net will increase tenfold. The .Net 2.0 implementation is in (of course) the System.Security.Cryptography namespace, but it is not in mscorlib. It's in System.Security.dll, so if you don't see it, make sure you add that as a reference and all will be well. You have 2 classes in there related to DPAPI: ProtectedData and ProtectedMemory. Their names tell you the difference between them. Here's a code sample of using DPAPI:

private string ProtectData(string clearText, string password)
{
    //convert our clear text into a byte array.
    byte[] clearTextBytes = System.Text.Encoding.UTF8.GetBytes(clearText);
    //We're going to add some entropy to this.
    //In this case, we're deriving random bytes from the password.
    //This is a good way to use passwords in a more secure manner.
    System.Security.Cryptography.PasswordDeriveBytes pwd =
        new System.Security.Cryptography.PasswordDeriveBytes(password, null);
    byte[] entropy = pwd.GetBytes(16);
    //Do the encryption.
    //Notice that it is a static method.
    byte[] cipherText = System.Security.Cryptography.ProtectedData.Protect(
        clearTextBytes, entropy,
        System.Security.Cryptography.DataProtectionScope.CurrentUser);
    //Return a Base64 string for easy storage or display.
    return Convert.ToBase64String(cipherText);
}

So ... not too hard, is it? Now, before you go off encrypting the keys in your web.config files, I must mention one more little thing: ASP.Net 2.0 will actually encrypt sections of the web.config file for you and handle the decryption invisibly at runtime - you just continue to use the configuration APIs like you always have. One way to do this is to use the aspnet_regiis command-line tool. You can read the docs on that on MSDN. More interesting to me, however, is the ability to do this in code. And, while the aspnet_regiis utility only works on web applications, doing this in code will work with every application. And so, without further ado, here's the code:

static public void EncryptConnectionStrings()
{
    // Get the current configuration file.
    System.Configuration.Configuration config =
        ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    // Get the section.
    ConnectionStringsSection section =
        (ConnectionStringsSection)config.GetSection("connectionStrings");
    // Protect (encrypt) the section.
    section.SectionInformation.ProtectSection("DpapiProtectedConfigurationProvider");
    // Save the encrypted section.
    section.SectionInformation.ForceSave = true;
    // And then save the config file.
    config.Save(ConfigurationSaveMode.Full);
}
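For completeness, going back to the ProtectData sample above: to get the data back, you derive the same entropy and call ProtectedData.Unprotect with the same scope. Here's a minimal sketch (my own illustration) that assumes the Base64 string produced by ProtectData.

//A minimal sketch of the reverse of ProtectData above: derive the same
//entropy from the password and call ProtectedData.Unprotect.
private string UnprotectData(string protectedText, string password)
{
    //The Base64 string produced by ProtectData.
    byte[] cipherText = Convert.FromBase64String(protectedText);
    //Derive the same entropy that was used to protect the data.
    System.Security.Cryptography.PasswordDeriveBytes pwd =
        new System.Security.Cryptography.PasswordDeriveBytes(password, null);
    byte[] entropy = pwd.GetBytes(16);
    //Unprotect must use the same entropy and scope that Protect used.
    byte[] clearTextBytes = System.Security.Cryptography.ProtectedData.Unprotect(
        cipherText, entropy,
        System.Security.Cryptography.DataProtectionScope.CurrentUser);
    return System.Text.Encoding.UTF8.GetString(clearTextBytes);
}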

Notes on Symmetric Cryptography

.NET Stuff | Security
Howdy y'all. Me again. I've gotten a lot of questions about doing crypto in .Net ... for some reason, it's been something that interests me quite a bit. Now, there are a bunch of resources out there on this, but they're (apparently) not always easy to find. So, I'm going to put some tips and thoughts here. First, let me say this: .Net has awesome support for crypto. This support is in the System.Security.Cryptography namespace (or a sub-namespace under that), with most of the classes implemented in mscorlib. I'm going to focus on symmetric encryption here (I'll deal with the others later).

Symmetric encryption is reversible (you can get the clear text back from the crypto text) and is based on a single key. There are several symmetric algorithms included with .Net, and all of their implementation classes derive from the System.Security.Cryptography.SymmetricAlgorithm abstract class:

DES (Data Encryption Standard) (FX 1.0+): This was the Federal Information Processing Standard (FIPS) starting in 1976. It has a 56-bit key, so with today's modern computers, it is subject to a brute-force attack in a trivial amount of time. It's not recommended for general usage anymore, but it has been so widely used for so long that it wouldn't be wise to leave it out.

TripleDES (FX 1.0+): Also commonly referred to as 3DES. Basically, as its name implies, it's DES three times over. There are (usually) 3 DES keys, and the cipher is run through the three keys on successive passes. There are actually several variations on the theme out and about, some using 2 keys, some using 1 key, but, in general, the most common method is three keys.

Rijndael (FX 1.0+): This is the algorithm that has become the Advanced Encryption Standard (AES) and is the replacement for 3DES. It supports 128, 192 and 256 bit keys. To put this in perspective: if a machine could recover a DES key in a second (using brute force), it would take approximately 149 trillion years to crack a 128-bit AES key (see http://csrc.nist.gov/archive/aes/index.html). It came out on top of an exhaustive analysis process by the National Institute of Standards and Technology (NIST), with input from the US National Security Agency (or No Such Agency, depending on your viewpoint), to determine the next FIPS algorithm. It was selected for its high level of security as well as its efficiency on modern processors (DES and 3DES were notoriously inefficient). The other algorithms were considered secure enough for non-classified information, but only Rijndael was considered secure enough for classified information. For details on the algorithm, see http://www.csrc.nist.gov/publications/fips/fips197/fips-197.pdf. Now, if you understand that stuff, let me know. Perhaps you could explain it to me in English.

AES (Fx 3.5): This is a FIPS-certified implementation of Rijndael. And yes, this is a big deal, especially for organizations that deal with the US government and, particularly, the DoD. For classified information, the key must be 192 or 256 bits.

Now, because they all derive from the same base class, using them is pretty much the same (with the exception of key sizes). Here's a code sample (with comments):

public static byte[] EncryptText(string clearText)
{
    //Create our algorithm.
    using (SymmetricAlgorithm alg = Rijndael.Create())
    {
        //Can also use:
        //SymmetricAlgorithm alg = SymmetricAlgorithm.Create("Rijndael");

        //For clarity, we'll generate the key.
        //In the real world, you'll likely get this from ... somewhere ...
        alg.GenerateKey();
        //An initialization vector is important for true security of the algorithm.
        alg.GenerateIV();

        //Create our output stream for the cipher text.
        //We're using a memory stream here, but you can use
        //any writable stream (i.e. FileStream).
        System.IO.MemoryStream outputStream = new System.IO.MemoryStream();

        //Create the crypto stream that the algorithm will use.
        //Anything written to it is encrypted and written to outputStream.
        CryptoStream crypStream = new CryptoStream(outputStream,
            alg.CreateEncryptor(), CryptoStreamMode.Write);

        //This writer will write to the CryptoStream.
        System.IO.StreamWriter inputWriter = new System.IO.StreamWriter(crypStream);

        //Write to the stream writer ... this writes to the underlying CryptoStream.
        inputWriter.Write(clearText);

        //Closing the writer flushes the final encrypted block and closes the
        //CryptoStream and the output stream along with it.
        inputWriter.Close();

        //The encrypted data is now ready.
        //If we were using, say, a FileStream for the output, we wouldn't need to do this.
        //MemoryStream.ToArray still works after the stream has been closed.
        byte[] cipherBytes = outputStream.ToArray();

        //... and return ...
        return cipherBytes;
    }
}

So ... the comments tell a lot of the story, but not all. What is an IV? No, it's not a needle ... it's an initialization vector. This is an extra bit of random gobbledygook that is mixed into the first block of the clear text before it is run through the cipher. This is actually very important to do. You see, these algorithms are block ciphers, meaning that they encrypt a block at a time. By default, they run in CipherBlockChaining (CBC) mode, where some of the previous block of cipher text is fed into the next block. This helps increase the randomness of the cipher text. However, if the clear text starts with the same pattern (not at all uncommon), then the beginning of the cipher text will also be the same. Not good, as it helps a bad guy reduce the key space. So ... the IV prevents that from happening. You can store the IV separately from the cipher text (in the clear ... bad guys can't get anything useful from it), or you can prepend it to the returned cipher text (so the return is [IV][cipher text]). I prefer the second ... it's a touch of security by obscurity (which isn't bad as long as it's not the only thing that you rely on ... it can be a part of a complete defense-in-depth strategy).
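Here's a minimal sketch of that prepend-the-IV idea (my own illustration, not from the original post): the IV goes on the front of the byte array on the way out and gets peeled back off before decrypting. With Rijndael's default 128-bit block size, the IV (and therefore ivLength) is 16 bytes.

//A minimal sketch of prepending the IV to the cipher text and splitting it
//back out later. Assumes ivLength matches the algorithm's BlockSize / 8.
private static byte[] PrependIV(byte[] iv, byte[] cipherText)
{
    byte[] result = new byte[iv.Length + cipherText.Length];
    Buffer.BlockCopy(iv, 0, result, 0, iv.Length);
    Buffer.BlockCopy(cipherText, 0, result, iv.Length, cipherText.Length);
    return result;
}

private static void SplitIV(byte[] data, int ivLength, out byte[] iv, out byte[] cipherText)
{
    iv = new byte[ivLength];
    cipherText = new byte[data.Length - ivLength];
    Buffer.BlockCopy(data, 0, iv, 0, ivLength);
    Buffer.BlockCopy(data, ivLength, cipherText, 0, cipherText.Length);
}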
Decrypting is very similar ... the same process (and almost the same code) in reverse. Here's a sample with fewer comments (many of the comments above also apply here).

public static string DecryptText(byte[] cipherText)
{
    using (SymmetricAlgorithm alg = Rijndael.Create())
    {
        //These will come from somewhere.
        alg.Key = ourKey;
        alg.IV = ourIV;

        System.IO.MemoryStream outputStream = new System.IO.MemoryStream();

        //Create the crypto stream.
        //This is the biggest difference between encryption and decryption:
        //note CreateDecryptor instead of CreateEncryptor.
        CryptoStream crypStream = new CryptoStream(outputStream,
            alg.CreateDecryptor(), CryptoStreamMode.Write);

        //Also a slightly different write because we have bytes to write.
        crypStream.Write(cipherText, 0, cipherText.Length);

        //Closing the crypto stream flushes the final block and closes the output stream.
        crypStream.Close();

        //The decrypted data is now ready to read.
        //If we were using, say, a FileStream for the output, we wouldn't need to do this.
        //MemoryStream.ToArray still works after the stream has been closed.
        string clearText = System.Text.Encoding.UTF8.GetString(outputStream.ToArray());

        //... and return ...
        return clearText;
    }
}

Some final comments:

I like to have all of the disposable objects in their own using blocks. I didn't do that here, to minimize the nesting of the using blocks for the sake of clarity. That said, I'm a big fan of using blocks. That's my story and I'm sticking to it.

I didn't talk about key storage. That's the stickiest part of using symmetric algorithms. I'll deal with that in a later post. Here's a clue: DPAPI.

If you notice, the encrypt and the decrypt functions are almost identical. Yes, it is possible to have both operations in one function, with a bool indicating encryption/decryption. I, personally, like to do this. I did not do that here for the sake of clarity and a clear separation between the two processes. I'm sure you can look at the samples above and make that happen (there's a sketch of one way to do it after these notes).

You can store the byte arrays as text/string. To do that, use this snippet: string cipherString = System.Convert.ToBase64String(data);

This really is pretty easy. It's very straightforward. If you think it's hard, try reading the documentation for the Win32 CryptoAPI. It's called the CryptoAPI because it's cryptic. It will make your brain hurt. Badly. I recommend a heavy dose of Advil after reading it. You'll need it.

Use one of these algorithms. I do prefer Rijndael/AES, but any of these (even DES) is better than creating your own "crypto algorithm". In the words of Michael Howard, that's craptography. Just say no. Don't do it. Unless you are a PhD in Mathematics specializing in crypto algorithms, you'll get it wrong.

Read the Rijndael article referenced above. If you can't understand it ... don't write your own algorithm. It's just that simple. Even if you do understand it, it's still not a good idea to write your own algorithm. Just use Rijndael. It's been well vetted, and just because the algorithm is known doesn't mean that it's less secure. One characteristic of a good algorithm is that its details can be public without compromising the security of the algorithm.
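On that "one function with a bool" note, here's a minimal sketch of what it might look like. This is my own illustration, not the original post's code: it uses ICryptoTransform.TransformFinalBlock instead of the stream plumbing just to keep it short, and it assumes the key and IV are managed elsewhere.

//A minimal sketch of combining encryption and decryption into one function
//with a bool. Key and IV handling are assumed to happen elsewhere.
public static byte[] Transform(byte[] data, byte[] key, byte[] iv, bool encrypt)
{
    using (SymmetricAlgorithm alg = Rijndael.Create())
    {
        alg.Key = key;
        alg.IV = iv;
        //Pick the direction based on the flag.
        using (ICryptoTransform transform = encrypt
            ? alg.CreateEncryptor()
            : alg.CreateDecryptor())
        {
            //TransformFinalBlock handles the padding for us.
            return transform.TransformFinalBlock(data, 0, data.Length);
        }
    }
}

Usage would be something like Transform(clearBytes, key, iv, true) to encrypt and Transform(cipherBytes, key, iv, false) to decrypt.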