Ruminations of J.net
Idle rants and ramblings of a code monkey

Now with Hitachi Consulting

Idle Babbling
If you’re following me on LinkedIn, you’ve already seen this: I just joined Hitachi Consulting here in Houston as a senior architect in their Microsoft practice. It’s an exciting challenge and I’m really looking forward to the new opportunities with Hitachi. I’ll be focusing on the Microsoft platform (of course!) and, specifically, on the broader business intelligence space. I’ve been putting a lot of thought lately into a “continuum of business intelligence” and where the different pieces – CEP, traditional BI and Big Data BI (Hadoop/PDW) – play in that story. I’m also continuing to work on my StreamInsight Foundation; you’ll see a new post on that before the end of the week where I’ll talk about how we’ll decouple queries and query logic from the producers and consumers of the data.

Where does StreamInsight fit?

StreamInsight | Idle Babbling
I’ve been working with StreamInsight for over two years now and this is one of those questions that I get all the time. Over that time, I’ve refined where I see StreamInsight fitting into an enterprise architecture … and that has included significantly expanding the potential use cases. Typically, when people look at StreamInsight – or CEP tools from other vendors – they think of monitoring sensors or financial markets. Of course, StreamInsight can do this and it’s very good at it, but there’s a lot more that it can do and more value that the technology can provide to the enterprise. Based on the number of forum posts and the increasing variety of users posting on the forums, it seems that others are beginning to experiment in this area as well and that adoption is picking up. So, in the past couple of months, I’ve really put a lot of thought into where StreamInsight fits outside of the traditional use cases and wanted to share that.

The Paradigm Shift

StreamInsight looks at and handles data in a fundamentally different way than we, as developers, are used to. This is something that everyone getting into StreamInsight struggles with, myself included. Traditionally, we look at data that is in some kind of durable store … whether that be a file, a traditional RDBMS or an OLAP cube. We look at what has happened and was recorded for posterity. It’s stable and static. Time, in traditional data, is an attribute – a field value – that describes the data we are looking at but is not an integral dimension of the data. Time doesn’t inherently impact how our joins work, how we calculate aggregates or how we select unless we use it in our WHERE clause. It’s not a part of the SELECT or FROM clauses that actually define the shape and structure of the data set. It’s static, relative to the dataset, and references some time in the past, much like a history book’s timeline.

For StreamInsight, it’s very different. In StreamInsight, time is an integral dimension of the data, a part of the FROM clause that we are familiar with. You don’t specify this in any of your LINQ queries but it’s there, an invisible dimension that impacts and affects everything that you do. It’s also the thing that’s hardest for developers to get their heads around because it is so radically different. Many of the query-focused questions on the forums deal with trying to understand how all of this temporality works and how timelines, CTIs and temporal headers interact with events and queries.

The things that this allows you to do are difficult in traditional systems. Certainly, they can be done, but not without a TON of code that navigates back and forth, keeping track of time attributes and processing in a loop. Even WINDOW functions don’t come close (I’ve been asked this) and, while they may provide some capabilities for things like running averages, doing something like “calculating the 30-minute rolling average every 5 seconds” – which is very easy in StreamInsight, as the sketch below shows – is pretty difficult to accomplish. A native and inherent understanding of the order of events (previous vs. current vs. next) or of holes in the data is also difficult – that’s going back to cursoring and ORDER BY clauses with a whole lot of looping in the mix as well. Yet, with StreamInsight’s temporal characteristics, these things are relatively simple to do.
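Just to make that concrete, here’s roughly what that rolling average looks like as a StreamInsight LINQ query. This is a minimal sketch: the SensorReading payload, its Value field and the sourceStream variable are placeholders, and depending on your StreamInsight version the HoppingWindow overload may also want an output policy argument.

    // A 30-minute rolling average, recalculated every 5 seconds.
    // The hopping window slides a 30-minute window forward in 5-second hops;
    // each hop produces one output event containing the aggregate.
    var rollingAverage = from win in sourceStream.HoppingWindow(
                             TimeSpan.FromMinutes(30),   // window size
                             TimeSpan.FromSeconds(5))    // hop size (output interval)
                         select win.Avg(e => e.Value);

That’s it … the temporal bookkeeping that would take pages of cursor-and-loop code in SQL is carried by the window operator itself.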
More sophisticated things, like deadbands and rate of change, are even more difficult with traditional data stores but absolutely doable in StreamInsight with extension points like user-defined operators and aggregates.

One comparison I like to make is to driving. The traditional data paradigm would have you driving with a digital camera, taking a picture every x amount of time and then using the display of the picture to navigate and drive. Could you actually do this? Maybe. Probably, if you were good and careful, your camera was fast enough, traffic wasn’t heavy and people actually drove intelligently. But you’d miss a whole lot of things that happen in between snapshots, you’d have a more difficult time understanding where things are going and there’d be latency in your reaction time. StreamInsight, however, is more similar to how we actually drive and take in our surroundings … our senses provide our brains with a continuous stream of information in a temporal context. We are constantly evaluating the road, other vehicles, their relationship to our current position and where we are going. StreamInsight does similar things with data, though not quite as efficiently as we do without even thinking about it. Our brains are, essentially, a massively parallel CEP system on steroids.

Beyond that, StreamInsight’s understanding of time isn’t necessarily tied to the system clock, another thing that took me some time to get my head wrapped around. Instead, the clock is controlled and moved forward by the application, independent of the system clock. This allows use cases where you can use the temporal capabilities to analyze stored data – essentially replaying the dataset on super-fast-forward.

An example of this was a POC that we did for a customer. They had a set of recorded sensor data, with readings for 175 sensors every 5 minutes, that represented about 3 months of data before and shortly after an equipment failure event. They gave us the data and some information about the equipment involved and asked us to find patterns that were predictive of an impending failure. Analyzing the dataset using traditional SQL queries got us nowhere … but when we started doing some (relatively) basic analysis by running the dataset through StreamInsight, several of the patterns quickly became apparent. In doing this, we used the original timestamps but enqueued the data every 50 ms – so 50 ms of real-world time equaled 5 minutes of application time. That way, four months of data could be compressed down to less than a half hour of processing. Now, if we had more powerful laptops than the dual-core i7s with 8 GB of RAM that we were using at the time, we could have done it even faster. Our real limitation wound up being the disk – we were reading the data from a locally installed Sql Server instance and writing to local CSV files to look at the results in Excel. In the end, we were able to determine that, by looking at 2 different values and their relative rates of change and variability over a specific time period, we could eliminate false alerts for things like equipment shutdown and pick up the impending equipment failure about a month before it actually happened. If we had a better understanding of the physics and engineering involved, we could probably have increased the warning time – but that wasn’t too bad for a couple of developers with very little (or no) engineering background and without the full specs of the equipment, basically shooting in the dark.
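For illustration, here’s a sketch of that replay pattern in the StreamInsight 2.1 style. None of this is the actual POC code – SensorReading, its Timestamp property, the LoadRecordedReadings loader and the app variable (a StreamInsight Application) are all stand-ins:

    // Pace the enqueue with the wall clock: one reading every 50 ms.
    static IEnumerable<SensorReading> PacedReadings()
    {
        foreach (var reading in LoadRecordedReadings())   // hypothetical loader
        {
            yield return reading;
            Thread.Sleep(50);   // 50 ms of wall-clock time = 5 minutes of application time
        }
    }

    // Each event's position on the timeline comes from the recorded timestamp,
    // not the system clock, so the queries still see the original 5-minute cadence.
    var replayStream = app.DefineEnumerable(() => PacedReadings())
        .ToPointStreamable(
            r => PointEvent.CreateInsert(r.Timestamp, r),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

The queries downstream neither know nor care that the “clock” is running roughly 6,000 times faster than real time.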
In tests, we’ve pushed about 30,000 events per second – randomly generated and without any analytics – through StreamInsight on our laptops, and over 100,000 events per second (with analytics and a remote Sql Server data source) on a commodity server-class machine (dual quad-core Xeon with 24 GB RAM) at an average of 35% CPU utilization.

The Three V’s – Big Data

“Big Data” is a very hot topic these days. Everyone’s all excited about the new capabilities provided by technologies like Hadoop/MapReduce and Massively Parallel Processing (MPP), and with good reason. These are ground-breaking technologies that allow us to more effectively get information from large amounts of data. But these are still technologies in the traditional paradigm of data – capture, store, retrieve and process. There is a latency involved with this that simply can’t be overcome due to the store/retrieve part of the cycle. No matter how fast the capture and process steps are, the disk is the bottleneck of the system. While SSDs reduce this latency, they can only do so much and are still the slowest part of the entire system.

When talking about Big Data, the “three V’s” often come up: Velocity, Volume and Variety. Hadoop and MPP deal – very well – with the massive volumes of data, and Hadoop adds capabilities around variety. But they have trouble – because of the paradigm – with velocity, the frequency with which data is generated and captured. And, let’s face it, velocity is a critical piece these days. Ten years ago, we talked about “moving at Internet speed” and the agility that the fast pace of change required businesses to have. Today, what we used to call “Internet speed” seems a snail’s pace. We’ve even coined new terms to describe it; “going viral” comes immediately to mind. I’ve come to call it “moving at Twitterspeed”, and enterprises need to become even more agile to keep up, especially when it comes to marketing. The impact of social media – particularly Facebook and Twitter – has really driven this fundamental change in the market, and companies have, more than once, found themselves completely blindsided by viral explosions across Facebook and Twitter. Understanding this velocity – and whether it is increasing or decreasing – across such sheer volumes of data is becoming a critical business capability, one that companies are struggling with and, in some cases, failing at spectacularly.

With StreamInsight, handling the velocity of “Twitterspeed” and understanding how things are trending is absolutely doable. Imagine a corporate marketing department being able to hook into Twitter and other social media streams, analyzing for specific keywords (or hashtags) and highlighting increasing (or decreasing) trends in these keywords … as they are happening. Within minutes, they can begin to get on top of trends as they are just beginning to “go viral” and formulate an intelligent, coherent response while there’s still time to get ahead of it. It used to be that these trends weren’t readily apparent for days or weeks; now it’s down to hours or minutes when things are going at Twitterspeed. Now, add in geo-location analytics and customers can begin to understand not only what is going on, but where. From here, we can get into more effective and meaningful targeting of marketing messages that may have relevance in one area but not another.
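A sketch of what that keyword trending could look like as a query (the Tweet payload, its Hashtag field and tweetStream are hypothetical, and the window sizes are arbitrary):

    // Mention counts per hashtag over the last 15 minutes, refreshed every minute.
    var hashtagCounts = from t in tweetStream
                        group t by t.Hashtag into perTag
                        from win in perTag.HoppingWindow(
                            TimeSpan.FromMinutes(15),
                            TimeSpan.FromMinutes(1))
                        select new { Tag = perTag.Key, Count = win.Count() };

From there, comparing each window’s count against the previous window’s – for example, by shifting the output stream one hop with ShiftEventTime and joining it to itself – is one way to flag the tags that are accelerating rather than merely popular.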
Outside of social media, we also have the increasing interest in the “Internet of Things” – smart devices that capture and report data. These have the potential to take both volume and velocity to a whole new level, one that makes even “Twitterspeed” look sluggish. Even now, with the IoT in its earliest stages, there are billions of devices participating, ranging from the smart phones that we carry with us everywhere to RFID, smart meters, smart roads and other device sensors to shoes and wristbands and everything in between. We are just entering an age of truly ubiquitous computing and connectivity, allowing us to capture data from a broad range of sources, both traditional and non-traditional. In many of these cases, even if the velocity isn’t fast, the volume is simply mind-boggling. With an estimated 8 billion or so connected devices today, volumes get very big, very fast, even if the individual readings aren’t changing rapidly. And the number of these devices is increasing exponentially, with a projected 50 billion devices by 2020.

StreamInsight is designed to handle both volume and velocity. Because it doesn’t require storage of data but, instead, does analytics in memory, it’s bound by CPU and memory speeds, not by disk. As a result, it can handle data velocity and volume that would simply overwhelm disk-oriented systems. This is especially the case when data needs to be continuously updated and analyzed … to do this with traditional technologies, you have to poll the data store, and you’ll have to be really careful doing that because you’ll very quickly overwhelm the system. But because StreamInsight pushes data all the way through, polling … and the latency and scalability issues associated with it … isn’t a significant problem (unless that’s how you get your source data, but that’s a completely different issue). You will, however, want to downsample the data before you send it to a sink/output adapter, and in the vast majority of cases this is actually desirable (more on that, with a sketch, at the end of this post). Storing every piece of data from the Internet of Things is, quite simply, cost-prohibitive from a storage perspective.

That brings us to the third “V” – Variety. This is a mixed story with StreamInsight. Individual input sources must have a strongly-typed schema; this is your payload. This limits what you can do with the unstructured data that is becoming more prevalent these days. That said, StreamInsight is very good at bringing together multiple (strongly-typed) sources, synchronizing them within the application timeline and then performing analytics across all of them within a temporal context. Take, for example, real-time web server analytics (I’m doing a presentation on this at Sql Saturday Baton Rouge, by the way). On one hand, you have performance counters – we’ve done a good job with these and there are tools a-plenty to monitor them. But how do they relate to the executing pages? What pages, with what parameters, are executing when the CPU spikes? Are there specific pages that take too long to execute and wind up causing our requests to queue? This requires not only perfmon counters but also some hooks into the ASP.NET pipeline. From there, StreamInsight can take these two very different (and differently structured) data sources and merge them together, synchronizing in time. BUT … the individual data sources are still highly structured.
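A rough sketch of that correlation (the payload types, field names and the CPU threshold are all invented for illustration). Joins in StreamInsight LINQ are temporal – events only pair up when their lifetimes overlap on the application timeline – so point events usually get their durations stretched first:

    // Point events have zero duration, so stretch lifetimes enough to overlap.
    var counters = perfCounterStream.AlterEventDuration(e => TimeSpan.FromSeconds(5));
    var requests = pageRequestStream.AlterEventDuration(e => TimeSpan.FromSeconds(5));

    // Temporal join: which pages were executing while the CPU was spiking?
    var hotPages = from c in counters
                   join r in requests on c.Host equals r.Host
                   where c.CpuPercent > 80
                   select new { r.Url, r.DurationMs, c.CpuPercent };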
Bringing it Together

Don’t take any of this to mean that I’m discounting traditional data paradigms. They are … and always will be … very important. They provide a view into the past that CEP technologies (like StreamInsight) just won’t be able to provide – and really aren’t the right tools for anyway. And these traditional paradigms, with their historical information, can and should be used as “reference data” (or metadata) that further informs real-time analytics. It’s the old axiom: to understand the present, you also need to understand the past and how it relates to the present. So it’s not an either-or discussion but a question of how these technologies fit into the continuum of data and analytics.

There’s a lot of focus on Big Data from a traditional paradigm, but there’s also a significant amount of value to be found in the data that’s on its way to storage, at the capture point of the process. Downsampling at this stage (sketched below) can also optimize storage costs and overall read performance from Big Data stores, as well as providing analytics in near-real-time. StreamInsight expands our capabilities for business intelligence and shortens the timeframe for getting actionable information from the volume of rapidly changing data that is becoming increasingly important – even critical – to businesses faced with things moving at Twitterspeed.
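And, as promised, a minimal sketch of that capture-point downsampling (same hypothetical SensorReading payload as above): one summary event per sensor per minute goes to the sink instead of every raw reading.

    // One average (and peak) per sensor per minute - this is what gets stored,
    // rather than every raw reading that came off the wire.
    var downsampled = from e in sensorStream
                      group e by e.SensorId into perSensor
                      from win in perSensor.TumblingWindow(TimeSpan.FromMinutes(1))
                      select new
                      {
                          SensorId = perSensor.Key,
                          Average = win.Avg(e => e.Value),
                          Peak = win.Max(e => e.Value)
                      };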

Passing the Technical Interview: Some Tips

Idle Babbling
I’ve been involved in doing technical interviews for … oh … I guess about the past 15 or so years. In my current role, I do quite a few and, while I don’t have the final hiring decision, I can completely nix the entire process for a candidate. In this post, I’m going to share some tips for passing a technical interview, because so many of the technical interviews I do don’t turn out well … for the interviewee. When I’ve talked with other senior-level people in the industry, I’ve discovered that this is, sadly, more the rule than the exception. While it does lead me to wonder how some of these people are actually getting jobs … it also scares me, because they are actually getting jobs writing code. Or … well … maybe it’s that I hold my team to a higher standard than much of the rest of the industry.

Do Your Research

You are being interviewed to join a team of technologists … so it is in your best interest to do some research on the team and the position that you are being interviewed for! If you are working with a recruiter, ask the recruiter for details, and ask if you can get an email from the interviewer or the team that provides some more information. When you get that information, research it! Look up, at least at a high level, things that you don’t know or aren’t familiar with. In the vast majority of interviews that I do for my team, the candidate shows no pre-interview initiative, even when they were told that they would be interviewing for a StreamInsight-focused position. I have actually had candidates tell me that they were being interviewed for a position that involves Microsoft StreamInsight but that they hadn’t heard of it before. OK … fair enough, I can accept that … but I’ll follow up with “Well, did you look it up?” C’mon, folks … search engines work very well for these kinds of tasks. I do not expect someone to be an expert in StreamInsight, but you will win major brownie points for having taken 30 minutes of your time to do a quick overview. Sadly, far too often I am told that they didn’t look into the technology before the interview. (OK … so I guess you don’t want the job, do you?) Candidates get even more brownie points for mentioning my blog and discussing something that I’ve actually posted. Read up – know your interviewer!

Beyond the technology, do your research on the company itself! This is pretty basic and doesn’t just apply to technical interviews but to all kinds of interviews. Show some interest in what the company does and its history. Provide some indication that you actually want the job and aren’t just there to collect your next paycheck. Yes, we are all at our jobs to collect paychecks, but I want people on my team that actually want to be there for more than the money. My philosophy – for a long time – has been to follow your heart, your passion, your bliss … the money will follow. It has worked very well for me and, more importantly, I’ve been happy at my various jobs. It may sound corny, but that doesn’t make it any less true – money cannot buy happiness.

Ask Questions

I love it when an interviewee asks questions, especially considering the topic above. There are few things better than a candidate that’s done a little research and has questions … it shows that they have a passion for the technology, and that is someone that I’m actually interested in hiring. I will start the interview with a brief description of the project, the team, the technology and the position … which is a perfect opening for additional questions.
At the end of the interview, I will also make a point of asking the candidate directly if they have any questions. And do not, under any circumstance, ask “How did I do on the interview?” Yes, I’ve actually had people ask that, but then, of course, they already knew that they didn’t do very well, so it was pointless to ask. Regardless, it’s just poor form. It shows that you weren’t prepared and that you know you didn’t do well but are, perhaps, hoping for some mercy. Sorry … if I recommend someone for hire to my team, that person reflects directly on me. Don’t take it personally if I’m not merciful. I want to hire the very best people that I can so that they can make me look good to my boss. And … guess what … every interviewer is going to feel the same way.

Say “I don’t know”

There’s nothing to be scared of here. It happens; it’s reality. No one in this industry can possibly know everything, not even your interviewer. And your tech interviewer is going to know this. So don’t be afraid to say that you don’t know the answer to a question, but be prepared to follow up with two things: 1) what you infer from what you already know and 2) how you would figure the answer out. I’ll tell you right now … how you deal with the follow-up is FAR more important than what you don’t know (assuming that it’s not something so basic that it’s trivial … you’d better know how to declare a variable in C#). Rather than saying “I don’t know”, I often hear candidates stutter, stammer, stop and otherwise embarrass themselves because they won’t actually admit that they don’t know something. Believe me, saying you don’t know is not only easier and less embarrassing, it opens up new avenues for you to impress me. I’ll also occasionally get a candidate that likes to spew a load of BS rather than admitting that they don’t know … and this is FAR, FAR, FAR worse. I had a button that a friend gave me in college that said, “If you can’t dazzle them with brilliance, baffle them with bullshit.” That, however, doesn’t work in a technical interview, so get it out of your head. The developer that thinks they know everything and insists that they know everything when they don’t is a very dangerous developer indeed. I’ve heard stories of interviews where the interviewers simply made stuff up to mess with someone … and the interviewee happily played right along by claiming to know – and to have worked with – this made-up technology. One thing that I do try to do in an interview is push the candidate to the edge of their knowledge to a) see how much they really know and b) get them to admit that they don’t know. Yes, that’s right … I’m looking for an “I don’t know”. And from all of the discussions that I’ve had with other senior-level technical interviewers, they are too.

Be Honest on your Resume

Do not, under any circumstance, put buzzwords on your resume just because you think they’ll impress someone. There is a very good chance that you will get questions about things on your resume, and you need to be able to answer those questions. The most common example that I’ve seen of this lately is “Design Patterns”. I can tell you right now that if I see that on your resume, I’m going to ask. In fact, if I don’t see it, I’ll still ask, but you won’t lose points if you can’t answer. Just about everyone puts that on their resume these days, and very few can actually name some common patterns. Even when given some common patterns, some have difficulty explaining them.
I asked one interviewee, who claimed to be “deeply familiar with design patterns” and a Senior Architect, “What patterns have you used?” He said that he had used some pattern or another (I forget which) in a recent project. OK … so, why did you choose that pattern? What problems did it solve? Were there any potential downsides? His response … “I don’t know. The architect on the project chose that and I just implemented it.” Wrong answer … it would have been far better to exclude design patterns from your resume than to lie. Now, I use design patterns as an example simply because it’s so common; it’s probably the most frequent “stretching of the truth” that I regularly see. But it’s not the only one. Keep in mind that there is a reason you’ve been asked to the technical interview, and it probably has something to do with the skills that you list on your resume. Assume, when writing your resume, that you will be interviewed by an expert in every technology that you mention. If you don’t have those skills … if you can’t answer questions about those things intelligently … it is better to just leave them off.

Avoid the “And one time, at band camp” Answers

A big part of a technical interview is what you know and how you think. Yes, we will cover previous projects, but I’m more interested in the hows and whys … how did you solve particular challenges? Why did you choose technology A over technology B? What was the good, the bad and the ugly? Look … I’ve been in the software development business for a long time. I know that there is no such thing as a perfect project or technology. And I’ve certainly had my share of projects that were “challenged” for one reason or another. I don’t expect every project that you’ve done to be perfect, but I do look for what you learned from it and how you solved problems. Yet, too often, I get what we (my team) call the “one time at band camp” response … “I was on project XYZ and we used technology ABC”. Thank you, I can read that on your resume. Can we drill down a little bit, please? And often I don’t get any more than “the client chose that” … OK, that’s fine, I get that … but what was your opinion of it? What was the good and the bad? What challenges did it solve and cause? A great deal of software development is analysis. What are the requirements? How should I attack them? What technologies do I have available? What questions do I need to ask to more deeply understand the problem? I’m looking for analytical skills and the ability to learn and grow from that analysis, not a rehashing of something that I can already read on your resume. If that’s all you are capable of doing, then you aren’t someone that I’m looking for … unless it’s a junior role where I know that you’ll be under a more senior and seasoned technologist that can do that analysis and make sure that you do what you need to do. I can tell you, in no uncertain terms, that “one time at band camp” interviews don’t lead to a job.

Have a Passion for Technology

Let’s face it … tech is a fast-paced, rapidly changing and dynamic industry, and it’s not slowing down. If anything, the pace of change is speeding up. So … you have to love it. You need to have a fire for it. What you know and do today will not be the same as what you know and do tomorrow … that’s just reality.
If you don’t love what you’re doing, you’ll quickly get left behind and become an albatross around the necks of your fellow teammates, as they will have to pick up the slack for your lack of passion. I, for one, like to challenge the members of my team to push into new areas, get outside of their comfort zone and take on new challenges. You have to love this field to thrive in that kind of environment. A real passion for technology also drives you to look for new and better ways to solve problems … not just repeating the same things that you’ve been doing over and over and over again for the past x number of years. It drives innovation. I look for candidates with that kind of drive … they’ll also be the ones that take it on themselves to challenge my own assumptions and/or ways of doing things. I welcome that … I want the people on my team to do that; I want them to challenge me. It helps me, personally, grow and learn more. Beyond all of that, I don’t want to hire someone that’s going to come to work and simply go through the motions. It’s bad for the entire team’s morale when even one team member is like that. From a team lead perspective, I also know that these people will be far less productive and will typically write lower-quality code, because they are simply doing it for a paycheck, not because they enjoy it. So make sure that you show that you love technology. Talk about the blogs that you read, cool stuff that you’ve done, whatever. Show some fire, show some spark. That passion can take you a long, long way. No passion == no job.

That’s what I’ve got right now. I may be adding to this (as a series) later on with other thoughts and comments … I have many. And no, I don’t ask “Why are manhole covers round?” Everyone knows the answer to that one already.

Why I will always have my 39th birthday …

Idle Babbling | StreamInsight
I’ve discovered that StreamInsight holds the key to eternal “youth”!

    var newLifeStream = from e in lifeStream
                            .AlterEventLifetime(
                                // Events greater than 39 years ago get moved forward a year
                                e => e.StartTime.AddYears(39) >= DateTime.Now
                                    ? e.StartTime.AddYears(1)
                                    : e.StartTime,
                                e => e.EndTime - e.StartTime)
                        select e;

It may need a little tweaking, particularly the event duration, but I still have two and a half months before “go-live.”

Changing Reading as we know it

Idle Babbling
Let me start with some background, as it helps put this in context. I love to read. I will read almost anything, and there are few books that I will abandon mid-read. I read all kinds of things … fantasy, sci-fi, history, non-fiction, classics, mysteries, suspense, horror, philosophy … you name it … with the very notable exception of things like Harlequin romance novels (I do have my standards). To be honest, I tend not to read books so much as devour them. I’ve always loved to read, for as long as I can remember. Even as a kid, I almost always had a book in my hand. In college, many years ago, I was an English Lit major with a Philosophy minor. More reading … a lot more reading … and I learned to do it faster while still retaining what I read. And I still loved it. Later in life, I took a speed-reading class, and that simply built on and expanded what I had already learned by necessity. I don’t often use the speed-reading techniques for my “enjoyment reading” – most of what I read these days is simply for enjoyment and relaxation, and speed reading is actually counterproductive to that – but it is very useful for professional reading, and I still have a high retention rate while speed reading.

I capitalized “Reading” in this title for a reason. It’s a key activity for transferring knowledge, something that has become even more important in the Internet age. But it’s more than transferring knowledge … it runs deeper than that, reaches into our individual and collective imagination, extends our minds in ways that are uniquely human. Yes, the devices and the medium have changed … but you still must read. And, as with everything else, practice makes perfect … the more you read, the better you are at it. Reading necessarily and deeply involves interpretation … you can never truly know “author’s intent” but can only attempt to understand, explain and analyze your interpretation of the resulting text; literature is a more intensely personal art than most realize. I’ll stop there and resist the temptation to delve further into linguistic theory and opinion.

There is something that brought this on. My wife got me the mostest bestest gift that I can imagine right now. I am difficult to buy for, so I had to give her hints and stuff. What was it? A Kindle 3. I had watched (carefully) the development of eBook readers from Amazon, Sony and Barnes & Noble. I played with devices in the stores and ones that friends owned. I looked at the number of available titles. The “flash” as pages changed annoyed me … I read “faster than the average bear” and that flash actually slowed me down because I had to wait for it. Then came the Kindle 3 … promising, most of all, to reduce the “flash”. I had played with the original Kindle and the Nook … I’ve yet to see the Kindle 2 but, from what I’ve heard, its “flash” isn’t much better than the Kindle 1’s or the Nook’s. When I played with the Nook at a BN shop, I asked the staff about the “flash”. “I’ve gotten used to it” or “I don’t notice it” was the response. I got a similar response from Kindle 1 users. I did notice it, and it got on my nerves … quickly. But then, again, I read faster than the average bear. The Kindle 3, though, did live up to the promise … the flashing is still there, but it’s a LOT faster. When I’m reading for enjoyment, it takes about the same amount of time as moving my eyes from the bottom of one page to the top of the next.
If I’m speed-reading, it will still slow me down but, since most of my reading is for enjoyment, that’s a minor issue. But there is more than speed … and, while the Kindle 3 fits well with my “enjoyment” reading speed, that’s not the only factor. I’m an IT guy, and IT books tend to be a) heavy and b) costly. With the Kindle, these are stored on the device and, from what I’ve seen so far, cost about half the price of the physical books. This makes it a lot easier to “carry” and access technical reference books than we have traditionally known – with full searchability as an added bonus – at a lower cost. One device … the size and weight of a standard paperback … loaded with all the reference books that you need … and that syncs with your PC so that you have the same reference books everywhere … well, that’s pretty damn cool.

This technology is a game-changer. In many ways, I’m a stodgy old fart when it comes to books … I do love the smell and feel of paper and that will never change. But the future of Reading is on these kinds of devices. Paper won’t go away, but it will be marginalized – I, personally, have thousands of books with only a hundred (at most) that I would buy physically – and mostly these are old, early editions (I have one of the first printings of Nietzsche in English; Kindle just can’t compete with that). I just downloaded Chaucer’s Canterbury Tales in the original Middle English (yes, I can read and understand Middle English; taking both French and German helped A LOT), with footnotes, for $0.99 … and the book when I took the class in college was $100+ … used (but it is a pretty, pretty book and not one that I’ll ever let go). I actually want to have both … but I do realize that I am the exception rather than the rule. Ironically (or not), my professor for that class was the only prof I had that would accept papers in digital form via email, though I’m sure that’s much more common now than it was in 1992-ish. I can see my Chaucer professor (Arnie Sanders at Goucher College) assigning a set of Kindle locations rather than (or in addition to) pages. Back then, he was, IMHO, on the forefront of technology and literature … we even talked, at one point, about mixing HyperCard technology with Reader Response theory to create a truly interactive text that would be a creative effort between the reader and the writer. He’s the same professor that introduced me to Neal Stephenson’s Snow Crash – ironic, considering where my career took me. But I digress.

This technology is a game-changer. I’ll say it again and I’ll say it over and over. It represents the publishing medium of tomorrow. As a society/culture, we are moving (quickly) into a purely digital format, and book publication has been lagging even further behind than music and video publishing. I carry my Kindle around everywhere now – it’s small, light and convenient. I also have the Kindle software on all of my PCs … this allows me to access all of the computer reference books that I have on the Kindle easily from anywhere and everywhere. Access is ubiquitous. It’s almost as good as my music and video collection. With only my Kindle, I can download any book that is available from the Amazon Kindle store at any time, regardless of where I am. Besides the obvious “immediate gratification” aspect, this is moving the “Information Age” further … making information (i.e. books) available anywhere, anytime … and on any device (didn’t Microsoft say something about that???).
As the “Information Age” generation grows up, they will expect more and more of this. The Kindle won’t be the be-all and end-all … there will need to be (and will be) a movement towards standardization of DRM on these devices that will further expand the platform. Most content cannot be exchanged between the Sony Reader, the Kindle and the Nook, and this will need to change. As these devices and the platform mature, I have little doubt that it will change – the market will demand it. If publishers want someone like me to buy a physical copy, they’ll need to include a digital copy as well. The movie studios have figured this out. Print publishers will need to do the same thing, if only to compete.

The Kindle is the device that I currently have in my hands and, quite honestly, I love it. Really, I do, I love it. In a little over a week, it has completely changed how I read. And as I use it, I see the future of this technology and where it can go. There will be very, very few books that I purchase in “analog” format. Most of those I do buy will have been printed before things like the Internet existed. Or will include a “digital” version. After all, I won’t purchase a movie without a digital version, and I don’t see that changing regardless of medium.

I love to read. My Kindle makes it easier and more enjoyable to read. Therefore, I love my Kindle. That’s all there is to it.

{Sidebar: Microsoft had similar technology years ago in Microsoft Reader (c. 2000). This was the initial platform for ClearType. And while ClearType is now standard and expected across Microsoft operating systems, the Microsoft Reader concept was abandoned. Microsoft never took the leap to a dedicated reading device but insisted that Reader be a part of the Windows platform, from the core OS to Windows Mobile. Yet another example where Microsoft’s marketing didn’t capitalize on their technology, allowed others to take the lead and pushed Microsoft into the background.}

“God Mode” in Windows 7

Idle Babbling
Now, I don’t know about you, but when I see “God Mode”, I think of running around in some shooter killing all the bad guys/monsters/aliens/whatever without any worry of damage or harm coming to me. Those of you that cut your teeth on some of the old-school DOS-based games of yester-century (a la Doom) will know exactly what I’m talking about. I don’t think of “God Mode” when it comes to an operating system. Unless, perhaps, it means running everything as admin. Really, though, that’s probably more akin to anti-God Mode if you think about it; you need to be a little more worried about potential damage and hazards since you are running as admin. But then there’s Windows 7, where we have things like User Account Control to keep you from doing too much harm. So is “God Mode” some super-anti-UAC thingie that makes it like the old days, when you could call deltree from the root of your C: drive and it would happily go about doing it? (Note: Yes, I know someone that did this. No, it wasn’t me. And no, I don’t recommend trying it.) Nope, not at all like the God Mode that I knew and loved from the days of Doom.

So when I saw an article called “Understanding Windows 7's 'GodMode'”, I was intrigued. It’s not the “God Mode” of Doom-yore, but it is interesting nonetheless and I’m kinda liking it. You create a folder and name it “GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}” (no quotes, of course). Then … behold! … the icon changes and you have access to a ton of controls and settings for the operating system. It does work on Windows Server 2008 R2 x64, which is what I’m running right now. It reminds me, in a way, of the Windows 95 Product Team Easter Egg. But it’s simpler to create and a lot more useful.
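If you’d rather not create and rename the folder by hand, a single command from a command prompt does the same thing (the “GodMode” prefix is arbitrary text; it’s the GUID that the shell keys on):

    md "GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"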

What is a software architect?

Idle Babbling
I had this discussion with someone recently and it's really gotten me thinking. There is a reason that I don't use that term much to describe myself ... it is so undefined, overused and inappropriately used that, in many ways, it has lost much of its value as a title (to me, at least). I cannot even begin to count how many folks I've come across that promote themselves as architects but that I certainly wouldn't call architects. For some, it's because they don't have any technical depth but are really good at regurgitating marketing material and BS. And because they have no technical depth, they often come up with "architectures" that are very difficult to implement properly, because they have no idea what it takes to actually make them happen in the real world. Unless, of course, you write marketing material. Others are good coders and, perhaps, could be considered solution architects (or lead developers) ... but they don't have the vision to see outside of a narrow problem or the short term. For still others, it's a political play in the rough-and-tumble world of corporate politics. Finally, you have those that are super-smart and in love with building things as complex as possible, using (what they perceive to be) all the latest and greatest tools and toys. If there is a technology that they want to play with, they will make it fit into the project, regardless of whether it actually adds any real value.

Yes, there are some that I would truly consider software architects. They are a rare breed ... they have deep technical knowledge and skills; they can code in the trenches with some of the best developers. But they have something more.

First, they deeply understand business priorities and can weigh technology choices against those priorities. It's often very hard to get developers to understand this and vocalize it; it was a challenge anytime I did an Architectural Design Session. Yes, they will look at the latest and greatest ... but they only adopt it when it actually makes business sense to do so. They also understand that you need to balance priorities and make certain trade-offs based on them - for example, sacrificing a little performance in return for higher productivity and quicker turn-around time. Or the other way around; it depends on the business priorities. For example, the guys that build Dell.com have very different priorities than the guys that do internal web apps, and they will (well, should) make different decisions based on those priorities.

Second, they have a "Big Picture" view. They know how the different moving pieces of any even moderately complex software system (should) fit together. I remember once, at an internal Microsoft training event on software architecture, a talk by the original architect of Microsoft Transaction Server (MTS) - which has morphed into different names over the years but is still a core part of Windows. He said that architecture is all about "HST" ... "hooking shit together". In the software world, it very much is.

Third, they take the long-term, practical view of development ... not just what we need to do today, but where we are going tomorrow. This is always in flux, but it is a core piece of how they look at the world.

Finally, they are also pragmatic ... what can be done within the constraints that we have and with the resources that we have? They know that all things are possible, given enough time, money and resources.
It may take building a new operating system or web server or middleware piece from the ground up, but it would be possible. Just not very pragmatic. Unless, of course, that happens to be your business.

I have met folks that fit the above description, but there are very, very few of them. I've met far more that fit into the first paragraph. And no, I will not name any names.

Final (?) Comments on Windows Server 2008 R2 as a desktop

Idle Babbling
I know … I keep bringing this up. It’s been a long road, and there were still a couple of things that I needed to work out to really, truly, fully replace the Vista/Windows 7 client with Windows Server 2008 R2 as my desktop OS … on both my traditional “desktop” machine and my laptop. I think, finally, I’ve got all of them worked out.

Power Management/Sleep/Hibernate Mode: I absolutely love sleep mode. I see no need to keep my machine running at 100% power all of the time. And I’m impatient, so I don’t like to wait for a full reboot if I don’t have to. I don’t use hibernate much, but that’s also nice to have. As I’m sure you are aware, Windows Server has no problem with the whole power management stuff … until you enable the Hyper-V role (which is one of the biggest reasons that I want to run Server 2008). Once you enable Hyper-V, you lose all power management capabilities. In Windows Server 2008, there was nothing you could do about this. When folks raised this as an issue, Microsoft’s response was … tough. Hyper-V is supposed to be on a server, and a server never sleeps. It doesn’t matter if you have VMs running or not, either. A lot of folks came up with workarounds/hacks that “enabled” this, with various degrees of success. Well, apparently there was enough of a hubbub for the Microsoft folks to do something about it. You’ll need to create a new boot entry with BCDEdit and set hypervisorlaunchtype to off (I’ve reproduced the commands at the end of this post). Full details and step-by-step instructions are on Virtual PC Guy’s WebLog. You will have to reboot to re-enable Hyper-V (and the hypervisor), but that’s OK for me … I don’t always run VMs and I’ll accept the reboot for that. It’s not my ideal scenario, but it works.

Zune: This sucked. I couldn’t get the Zune software to install for anything. Improper version or some such nonsense. Which meant that I couldn’t access my Zune Pass and couldn’t sync with my Zune unless I dual-booted. Apparently, the Zune folks don’t think that Windows Server is an appropriate platform for Zune. Fortunately, I found a post on David Zazzo’s blog that takes you through doing this step-by-step. One note: I right-clicked on packages\Zune-x64.msi and clicked on “Troubleshoot Compatibility” … which applied the setting “Skip Version Check”. Just running ZuneSetup.exe … even in compatibility mode … didn’t work.
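For reference, the boot-entry setup described above boils down to something like this from an elevated command prompt (the description text is arbitrary, and the GUID in the second command is whatever the first command echoes back):

    bcdedit /copy {current} /d "No Hypervisor"
    bcdedit /set {guid-from-copy-output} hypervisorlaunchtype off

Reboot into the “No Hypervisor” entry when you want sleep and hibernate; reboot into the original entry when you need Hyper-V back.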

More on Windows Server 2008 R2 as a desktop

Idle Babbling
Since I did the last post on this, I’ve also (now) installed Server 2008 R2 on my personal desktop … as my laptop had to be turned in. In doing this and getting it set up to be a day-to-day desktop OS (as opposed to a demo machine OS), I ran across a couple of other things that I thought were worth noting.

IE ESC: That’s Enhanced Security Configuration … the ultra-secure-because-it’s-only-HTML mode of Internet Explorer that is enabled by default on Windows Server. Again, it’s something that makes a TON of sense on a server but doesn’t work very well when you are using the machine as a desktop. I had thought (silly me) that it’d be easy … go into the Server Manager and turn it off. Well, there were complications. Here’s the deal: I run with a different account than the built-in Administrator account. It’s also the account that ties my machine to my Windows Home Server (which is way cool, btw). When I created the account, I did not initially add it to the Administrators group. So, when I turned IE ESC off for Admins, it didn’t turn off for that account … because it wasn’t an admin. Easy enough … I turned off IE ESC for all users. Nope. Didn’t work. Added my account to the Administrators group. And it still didn’t work … I was still running IE in the Enhanced Security mode. Even after rebooting. I went to “User Accounts” in Control Panel (it’s just like on the desktop version) and couldn’t add that account as an Administrator account there either. So … I wound up deleting the account and recreating it using the “User Accounts” applet in Control Panel, creating it as an administrator account. Then it worked. Just fine. I don’t know why this happened. I cannot explain it at all. But there it is.

Windows 7 Themes: I did turn on the themes and eye candy as mentioned previously. But the Win7 themes aren’t included, and I couldn’t find a way to install them. Easy enough … copy them from a Windows 7 installation. They will be under %WINDIR%\Resources\Themes. You’ll also want to copy the pictures (%WINDIR%\Web\Wallpaper) and the cursors (%WINDIR%\Cursors). They will then appear in your personalization window.

Windows Search: This one is important for finding stuff in Outlook and on your drives in a reasonable amount of time. It is not installed by default in Windows Server … and Outlook will tell you all about it and the necessity of installing it if you want to do any searching. You cannot find it in Features. There’s a download of Windows Search 4.0 for Vista … that doesn’t work either (it refuses to install). So where is it? It is under Roles … File Services … Windows Search. Perfectly logical, right?

So there it is. I’ll post any more tidbits as I happen across them. So far, though, all is well and happy.

Bikers, Geeks and Community

Idle Babbling | Community
When motorcyclists pass each other going in opposite directions, they wave at each other. Watch them sometime; you’ll see this happen. A lot of non-motorcyclists (we call them “cagers”) don’t notice this until it’s pointed out, but you’ll see it if you look for it. It doesn’t matter if you are riding a crotch rocket or a Harley, a Goldwing or a dual-sport, whether you are suited up in all leather and a helmet or riding with no gear at all, bikers will still wave. If a motorcyclist sees another biker stopped on the side of the road, they will usually stop to check and see if they are OK. That’s just how it is. When commuting, bikers will also sometimes join each other in traffic and ride together for a time as their commute allows. Again, you’ll see this. But I’d bet you never even considered that those two bikers might not know each other.

There are also biker-specific forums – I’m on Two Wheeled Texans – that all kinds of bikers participate in. There are group rides, too: random people hooking up just to ride together. Some are random groups from the boards, some are more “organized”. For example, TWT has a monthly “Pie Run” to a small restaurant in a small town in Texas, and anywhere from 80 to 250 bikers will show up, on ALL kinds of bikes from ALL over Texas. I even saw someone at one of the Pie Runs on a vintage 1943 Army-issue Harley! Bikers will also get together for a “Bike Night”. As the name implies, it’s an evening for bikes and bikers to hang out together at a local restaurant/ice cream shop/parking lot/whatever. I can often be found at “Katy Bike Night” on Wednesdays, munching on empanadas with anywhere from 3 to 20 fellow TWT’ers.

There is a strong sense of community among motorcyclists that is built on a common, shared experience … namely, riding a motorcycle. We share a love for riding and feeling the wind blowing over us. We also share common dangers and risks – for the most part, “cagers” are the greatest risk, but they’re not the only one (think … weather … a 45 MPH crosswind is absolutely, positively NOT FUN). Sure, we have our differences – every group does – but the sense of community is stronger than that. Yes, there are individual exceptions to this but, as a rule, that’s how it is. And those that get snobby about their “group” are considered rude at best. And I won’t even mention “squids”.

Why do I mention this? Well … it’s that community thing. I’ve been involved in the developer community for some six years now and the biker community for about two. I can tell you, the biker community is much stronger and, even more importantly, much more inclusive. In the developer community, there is – and let’s be honest here – a huge wall separating technologists with different specializations. Java guys don’t talk to .NET folks, and neither talks to PHP folks. Linux folks don’t talk to Microsoft folks. Sure, there are exceptions here and there, but the rule is different; we don’t intermingle. Do you know of any boards online where you have PHP and .NET and Java folks all mixin’ it up together in harmony? I certainly don’t. Even boards that cater to all types of technologists will have different forums where techies of like technologies congregate, with very little interaction between the groups. We tend to get wrapped up in our own areas of technology and look at technologists in other areas with wariness at best. Certainly one difference is competition … if Java is chosen as a technology at a given company, the .NET folks will be looking for work. And, again, vice-versa.
That’s not the case with motorcyclists – it has no impact on my life if a fellow biker buys a new Ninja or a new Goldwing … I can appreciate both, and it has no bearing at all on my ability to provide for my family (even if you won’t catch me dead on a Goldwing!). But there’s something more than that – overall, there also seems to be little interaction between infrastructure/network folks and developer types, even within the same technology area. When you think about it, it’s actually quite silly. Yes, there is that competition, but I can’t see why we can’t be more like the motorcyclist community … inclusive and sharing what we have in common (which is quite a bit) rather than focusing solely on our differences. All of us have a love for technology, and we all have the same gripes and issues with end users, customers, managers and the like. Regardless of our technology, there is much that we can share and much that we can learn from each other. Even if that’s only an appreciation for other technologies.

I think it’ll be interesting to walk into a PHP user group. I’d bet that they are little different from the .NET user groups that I go to. I won’t say anything. Well, I’ll try not to say anything, or not too much at least. I’m not there to convert them, spy on them or any other such nonsense. Just getting a feel. Who knows … maybe I could persuade one or two to see what a .NET user group is like. And get them cross-pollinating, with .NET folks going to PHP user groups. It won’t be the end of the world by any stretch of the imagination. But it would certainly make the community much more interesting. And maybe … just maybe … we’ll take a step towards breaking down these silly walls that would divide us. We’ll see …