So a recent item that I've been tasked with is programmatically finding the application that is required to open/load a file of a given file extension. One would think that this would be a straightforward task and quite easy to accomplish with .Net.
My first thought was to check the System.Environment namespace for any mappings of file extensions. Nada. My next instinct was to check System.Runtime.InteropServices to see if there was a chance that some object (maybe Marshal) had access to such a table. No go. I also stopped by Microsoft.Win32...nothing as well.
I did come across an interesting method in System.Drawing.Icon, ExtractAssociatedIcon(), which would be useful if that were what I was after. Unfortunately, I didn't find anything that would resemble this functionality in System.IO.
So it would seem that for the time being, the only way that I can get at this data is to read registry keys.
What I've found is the following algorithm:
- Open HKEY_CLASSES_ROOT\<extension>\(Default). This gives us the class name that handles the file extension.
- Open HKEY_CLASSES_ROOT\<class-name>\CLSID\(Default). This gives us the GUID class ID of the application that handles the file extension.
- Open HKEY_CLASSES_ROOT\CLSID\<class-id>\InProcServer32|LocalServer32\(Default). This will yield the path to the executable which handles the file extension.
In some cases, it is enough to open the first key (the extension key): you may find a subkey called OpenWithList whose first subkey is the name of the application's executable (but this isn't always available).
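The walk above can be sketched as follows. This is Python rather than C# for brevity, and the registry access is abstracted behind a `read_default(path)` callable so the logic stands on its own (on Windows you'd back it with `Microsoft.Win32.Registry` or Python's `winreg`; the registry slice below is entirely made up for illustration):

```python
def resolve_handler(extension, read_default):
    """Resolve the handling executable for a file extension by walking
    HKEY_CLASSES_ROOT, per the three steps above.

    read_default(path) returns the (Default) value of a key under HKCR,
    or None if the key is missing.
    """
    # Step 1: HKCR\<extension> -> class name
    class_name = read_default(extension)
    if class_name is None:
        return None
    # Step 2: HKCR\<class-name>\CLSID -> GUID class ID
    class_id = read_default(rf"{class_name}\CLSID")
    if class_id is None:
        return None
    # Step 3: HKCR\CLSID\<class-id>\InProcServer32|LocalServer32 -> path
    for server_key in ("InProcServer32", "LocalServer32"):
        path = read_default(rf"CLSID\{class_id}\{server_key}")
        if path is not None:
            return path
    return None


# A fake slice of HKEY_CLASSES_ROOT, purely illustrative.
fake_hkcr = {
    ".xyz": "XyzFile",
    r"XyzFile\CLSID": "{00000000-0000-0000-0000-000000000000}",
    r"CLSID\{00000000-0000-0000-0000-000000000000}\LocalServer32":
        r"C:\Program Files\Xyz\xyz.exe",
}

print(resolve_handler(".xyz", fake_hkcr.get))
```

Note that any of the three keys can be missing, which is why each step bails out with None rather than assuming a well-formed registry.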
I'm thinking there has to be a better way to do this programmatically, but I haven't come across it yet.
The following are the contents of an actual email that I received this afternoon:
I have a very hot Java Developer rek with a direct-client in Manhattan. Please send me candidates with rates and contact-numbers.
Interviews this week.
I would be shocked if Priya lands any candidate this year.
Let's dissect all of the ways that this email is completely ineffective:
I am a .Net developer. I know Java and worked with it for 4 years in college, but I have not worked with it in any sort of significant capacity since then. Assuming this email was generated based on my profile on either Dice or Monster, you'd think that the recruiter would attempt to filter the candidates...
"Hi cchen"? Why am I being addressed by my email account name? I'm pretty sure that I have my name listed on Dice and Monster. If this were to have come from some third party database somehow, wouldn't it make sense to just use something like "Good afternnon,..."?
"rek"? Take some time and type out the extra keystrokes and spell it out. A little professionalism goes a long way.
No. I am not doing your job for you.
Contrast this with another email from a Bradley G.:
My name is Bradley and I'm an IT recruiter at [XYZ]Technologies Corporation. Our records show that you are an experienced IT professional with experience in .NET. This experience is relevant to one of my current openings.
The opening requires ASP in addition to the above skills.
It is located in Pittsburgh, PA.
If you are qualified, available, interested, planning to make a change, or know of a friend who might have the required qualifications and interest, even if we have spoken recently about a different position. You may also send me an e-mail with a copy of your current resume. If you do respond via e-mail please include a daytime phone number so I can reach you. In considering candidates, time is of the essence, so please respond ASAP. Thank you.
If you are not currently seeking employment, or if you would prefer I contact you at some later date, please indicate your date of availability so that I may honor your request. In any event, I respectfully recommend you continue to avail yourself to the employment options and job market information we provide with our e-mail notices.
Much more professional and much more likely to actually attract a candidate, but it still gets a few items wrong:
- I did not apply for any job. It would have been better to use a simple greeting like "Good afternoon" or "Greetings" or perhaps even something creative like "Hello from IT land!"
- I would have dropped the "please respond ASAP"...it seems like a bad sales pitch.
Just some random afternoon ranting...
In trying to wrap my head around how solutions should be designed and componentized in SCSF/CAB, I've spent a bit of time trying to study up on Model View Controller (MVC) and Model View Presenter (MVP).
The packaged documentation, in my opinion, doesn't necessarily do a good job of covering these two topics and the variations of MVP that really make sense in CAB.
One of the best resources for discourse on these two topics is Martin Fowler's article on GUI Architectures, which provides a broad view of three of the most common underlying architectural choices for many of the GUIs that we work with today.
What I've observed is that Passive View (PV) is more "aligned" with the design of the components in CAB than Supervising Controller (SC), which Fowler summarizes:
The separation advantage is that it pulls all the behavioral complexity away from the basic window itself, making it easier to understand. This advantage is offset by the fact that the controller is still closely coupled to its screen, needing a pretty intimate knowledge of the details of the screen. In which case there is a real question mark over whether it's worth the effort of making it a separate object.
In my opinion, since the CAB generated presenter isn't coupled with a concrete implementation of the view, PV is the way to go since this will allow a lower level of coupling with the concrete implementation of the view. I went about this by adding interface methods to return Control (one could go as generic as Object as well) instances from the view which would then be rendered, wired, and databound by the presenter. In other words, it's not really Model View Presenter as the documentation would have you believe.
Regardless, this allows a great deal of flexibility in the design of the module as a whole as the view can truly be replaced completely independently of the presenter since the presenter is only coupled with the interface class. In addition, it allows for easy replacement of data visualization types using the adapter pattern to connect data to the control type (for example, having one adapter if the returned control is of type TreeView and another when the control is a DataGridView).
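The shape of this design can be made concrete with a minimal sketch - in Python rather than C#, and with invented names (`TaskView`, `TreeAdapter`, and so on are illustrative stand-ins, not CAB/SCSF types): the presenter is coupled only to the view's interface, which hands back a control, and an adapter is selected by the control's concrete type.

```python
# Minimal Passive View sketch. The view exposes its control through an
# interface method; the presenter picks an adapter by control type and
# pushes the data. All names here are illustrative, not actual CAB types.

class TreeView:            # stand-in for a WinForms TreeView
    def __init__(self):
        self.nodes = []

class DataGridView:        # stand-in for a WinForms DataGridView
    def __init__(self):
        self.rows = []

class TreeAdapter:
    def bind(self, control, data):
        control.nodes = [f"node:{d}" for d in data]

class GridAdapter:
    def bind(self, control, data):
        control.rows = [(d,) for d in data]

class TaskView:
    """Concrete view; the presenter only ever calls get_control()."""
    def __init__(self, control):
        self._control = control

    def get_control(self):
        return self._control

class TaskPresenter:
    # Adapter selection by control type, per the adapter pattern above.
    ADAPTERS = {TreeView: TreeAdapter, DataGridView: GridAdapter}

    def __init__(self, view):
        self.view = view

    def present(self, data):
        control = self.view.get_control()
        adapter = self.ADAPTERS[type(control)]()
        adapter.bind(control, data)
        return control

# The same presenter drives either control type unchanged.
tree = TaskPresenter(TaskView(TreeView())).present(["a", "b"])
print(tree.nodes)
```

Swapping the view for one that returns a DataGridView requires no change to the presenter, which is the point: the coupling runs only through the interface and the adapter table.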
However, an even better description of the architectural intent of CAB modules is Humble Dialog, a pattern formalized by Michael Feathers. What Feathers terms "smart object" is congruent to the presenter, which "pushes data onto the view class" through a view interface. The view, of course, is the actual UserControl derived view class. Now this leaves a key question: what is the place of the model in such a pattern? Does it have a place? With SCSF (but not necessarily with CAB when used with desktop applications), the classic sense of the model almost has no place in such an architecture; the model is but a dumb container for data. It leaves you with what Fowler terms an Anemic Domain Model, an "anti-pattern":
The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data.
While Fowler views this pattern with disdain, I can't help but wonder whether it is the most natural choice for a smart client style application which should rely heavily on service layers to handle the data models. Otherwise, it would seem that one would end up writing a great deal of domain objects which were nothing more than shells (well, perhaps this is still useful if complex service interactions are necessary) which make the calls through the service proxies.
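For what it's worth, the "anemic" shape Fowler describes looks something like this - a deliberately contrived sketch in Python rather than C#, with the service proxy stubbed out by a lambda: the domain object is nothing but a bag of fields, and all of the behavior lives in a service object that calls through the proxy.

```python
from dataclasses import dataclass

# The "anemic" domain object: data and nothing else.
@dataclass
class Order:
    customer_id: int
    items: list
    total: float = 0.0

# All domain logic lives in a service object layered on top.
class OrderService:
    def __init__(self, pricing_proxy):
        # pricing_proxy stands in for a call through a real service proxy
        self.pricing_proxy = pricing_proxy

    def price_order(self, order):
        order.total = sum(self.pricing_proxy(item) for item in order.items)
        return order

order = Order(customer_id=42, items=["widget", "gadget"])
priced = OrderService(lambda item: 9.99).price_order(order)
print(priced.total)
```

In a smart client, that lambda would be the generated service proxy, which is exactly why the model ends up as a shell: the interesting logic lives on the other side of the service boundary.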
Some random stuff and some not so random stuff.
First, the Oral-B CrossAction Vitalizer is possibly the best damn (non-electric) toothbrush ever made. It's comfy on the gums, it gets to the back teeth, and the handle is just right for control of pressure and angle. The "soft" bristle isn't very soft at all, but I've found that it doesn't cause any pain or damage to my gums...it's just right.
Second, in reorganizing some of our code, I jumped headfirst into Microsoft's Smart Client Software Factory (SCSF). While the verdict is far from conclusive, I have to say: I like it. It implements several ideas I was tossing around in my head for a Windows Forms client but is obviously much, much more well developed and thought out.
I do find it annoying that it is somewhat difficult to visualize the relationships. That is the one advantage to using an XML based object configuration system as opposed to the attribute based system used by the Composite UI Application Block (CAB) that SCSF is built on: it's quite easy to wrap your head around the relationships and see how the pieces fit simply by reading the XML. To be honest, it wouldn't be too terribly difficult to build similar (but certainly much less refined, given the amount of time I have) facilities with Spring utilizing the dependency injection, loosely coupled events, expression evaluation, and other components of Spring.
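The trade-off can be shown with a toy sketch (Python; the decorator below merely imitates the spirit of CAB's attributes and is not any real API, and the service name is invented): with attribute-style wiring the relationship is declared on the class itself, so you have to visit every class to see the graph, while with XML-style wiring the same relationship sits in one central document you can read top to bottom.

```python
# Attribute-style wiring: the dependency is declared on the class,
# so the overall object graph is scattered across the codebase.
def inject(service_name):
    def mark(cls):
        cls._dependency = service_name
        return cls
    return mark

@inject("clock_service")       # hypothetical service name
class StatusBarPart:
    pass

# XML-style wiring: the same relationship, read from one central document.
import xml.etree.ElementTree as ET

config = ET.fromstring("""
<objects>
  <object id="StatusBarPart" depends-on="clock_service"/>
</objects>
""")
wiring = {o.get("id"): o.get("depends-on") for o in config}

print(StatusBarPart._dependency, wiring["StatusBarPart"])
```

Both end up expressing the same dependency; the XML version just happens to be browsable in one place, which is the visualization advantage mentioned above.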
However, I think that the entire package, including the GAX pieces, makes this too compelling a package to pass up. There is surely a huge learning curve for the framework and library itself, but I think it will be made up for with the gain in development and deployment speed.
It's kind of cool that it also supports WPF modules. I spent quite some time last night (up until 2 AM) trying to replicate some of the existing UI components that we have into WPF UIs. I think I'm starting to "get it". Not in the sense of why XAML is great for UI developers (I've always preferred declarative markup), but in the sense that I've kind of aligned myself with some of the principles of XAML (i.e. layout, grids/tables, applying styles, backgrounds, etc. - which are of course slightly different than HTML) and I can now really appreciate how much easier it will be to create snazzy UIs in the future.
While I have Expression Blend, I found it much more constructive to actually go into the raw XAML and write it by hand (well, not to mention that I find the UI hideous and unusable or perhaps I'm just way too Adobe-ized). It was quite slow going at first, but once it clicked, it picked up very quickly.
Ultimately, however, I don't think that I can use WPF for our next release. There seems to be some runtime instability with the current Visual Studio 2005 extensions (the November 2006 CTPs) to support WPF and WCF...I was only able to run my application once last night; subsequently, it would crash immediately. The problem was only fixed by rebooting my machine.
But in any case, there are some great SCSF resources out there. I would start from Cabpedia.com as CAB itself is really what presents the challenge in the Smart Client. Cabpedia has a list of great resources which helped me at least grasp some of the conceptual ideas behind CAB - in particular, a series of articles by Szymon Kobalczyk.
Warning: massive brain dump ahead...
As I was lying down to sleep and having a discussion with my wife - much to her dismay - the topic of her current graduate class came up and she mentioned how much she enjoyed just sitting down and writing for 45 minutes each class. I found it strange that she should put it in such a perspective. I mean, there's nothing preventing her from taking the time to sit down and write for 45 minutes each day (and she did keep a journal up until maybe 3 or 4 years ago) as surely, countless minutes of her day (and any average person's day) are spent doing mindless things like watching television or eating or something else equally useless.
The idea of commitment chains occurred to me as I was using an analogy about exercise and trying to convince her that writing for 45 minutes each day is relatively trivial compared to working out. Think about it: in exercising, one starts a chain of commitments which can seem unconsciously daunting. To exercise is to sweat, to sweat is to necessitate an immediate shower (well, unless you don't mind body odor or the salty stickiness of sweat), to exercise necessitates a larger load of laundry, and most importantly, in this proposition, is that it necessitates a healthy lifestyle lest that exercise go for naught.
It is a relatively large commitment chain to make simply by exercising and perhaps this is why so many people find it so difficult to maintain a healthy lifestyle: the weight of this commitment chain is simply too heavy. On the other hand, writing for pleasure carries little commitment of any kind. You write if you want to, you don't if you are not in the mood.
What's the point? No point, really 😀 I guess if there was a point, then perhaps it is that very often in life, we don't really take into consideration how little commitment it takes to do what we want to do and do what we enjoy. We also fail to realize how these low commitment activities have a profound effect on our lives as they help us feel like we've done something. Simple things like taking a stroll around the block, watering some flowers, laying down and watching the clouds pass, sitting with a cat on the grass, drinking a cup of lemonade on a hazy summer afternoon. Perhaps that's the secret to finding balance in life: to have a healthy mixture of tasks with long commitment chains (work, family, health) mixed with activities of low commitment (I'm mixed on whether blogging is the former or the latter, but I do find it constructive to put thoughts to text sometimes).
Shifting gears now.
Prior to this discussion, we had another discussion about how we visualize dates. I was thinking back to something that I had once read about how to interview tech candidates: propose that some object typically comes in a set of 14. Now 5 additional elements are introduced...ask the candidate how he or she would organize the new elements.
Some people, like my wife, would tend to place the 5 elements "below" the 14 elements and line them up and start to form a multidimensional array - or a matrix, if you will. Some people like me, would visualize it as a separate block of elements, but in a linear manner...more like containment where the first set contains 14 elements and the second set contains 5, but they are part of yet a larger set. It is less of a repeating pattern and more of a general grouping.
This manifested itself clearly in the way in which we think about and visualize dates. For her, as day of the week is important, she tends to organize her events and key dates in a typical calendar fashion and in fact, she can visualize it so well, that given one event in a month, she can probably tell you the day of the week of any other date in the month nearly instantly. She views the set of 7 days in a week as a part of a matrix much as a calendar is typically visualized.
In my case, as day of the week is generally not that important, I visualize date and time as linear and quite abstract (I think the most natural way to think about it since it really is linear and absolute...it is only the incidental cyclical nature of our orbit around our Sun that defines constructs like seconds, minutes, hours, days, weeks, years and so on). In my case, I am terrible at remembering dates and I am terrible at remembering order; I only roughly index that I have something to do some time in the future. Ask me what I'll be doing two weeks from now, and it'll take me a good amount of time to figure that out whereas my wife's response will be nearly instantaneous. I tend to think of time in blocks where I have commitments (meetings, errands, and so on) and blocks where I don't have commitments.
When you really think about it, time itself is completely abstract (what is it? will it end? when did it start? how much of it is there? what does it look like? what is the absolute unit of time? can it really even be counted?), but the organization and demarcation of time into units seems...weird and useless to me; I am fine thinking about it in the abstract (i.e. "some time in the future, I need to do this") and not as an absolute (i.e. "on such and such date at such and such time, I need to do this" or "x units from now, I need to do this").
There is a parallel in my profession: as a software developer, there is nothing tangible about the constructs that I build; the constructs that I build are purely abstract in nature: every GUI, every construct in software, is but an abstraction of numerous lines of code - or, is it the other way around? Software is but one layer of abstraction on top of another...modern day software could not exist without the huge levels of abstractions that have been built to allow programs to be written efficiently. Buttons are not buttons, they are rectangles. Rectangles are not rectangles, they are arrangements of lines. Lines are not lines, but merely a linear set of pixels. But in essence, there is nothing to grasp and to utilize to visualize proportion, all of it is purely hypothetical and kind of "uploaded" into my brain as a set of objects, relationships, and other abstract constructs when I sit down at my desk in the morning.
In actuality, I find this process of uploading and unloading quite unpleasant (particularly the unloading part). I have been told by my coworkers, wife, and family members that I can become quite unruly when I'm involved in my work. The reality of it is that when I'm in my groove, unloading and then loading so much abstraction and so much data causes some sort of mental instability...I just get frustrated at the individual forcing the purge or I just lose my groove and have to kind of veg for the rest of the day...I simply cannot be constructive.
From an observer's perspective, I think this makes me seem like a loner or anti-social or if a colleague is coming to me with questions, it may seem like I'm impatient or uncooperative. In reality, my bitter reaction is more of a defensive mechanism to kind of keep myself from having to go through these periods of derailment as in my case it's not a temporary derailment...it's like a long term derailment once it happens as there is simply too much data to store and reload that it's taxing on my mind.
For this reason, I think I've recently been in some hot water with some coworkers. I simply don't take afternoon interruptions very well, as that is when it is hardest to recover from derailment.
Of course, the whole reason that this discussion and train of thought came up was the movie Stranger Than Fiction (it's an absolutely brilliant screenplay with an absolutely excellent performance by Will Ferrell (every time you think he's going to break into his "normal" genres, he surprises you and keeps his acting true to the character...a brilliant performance)).
This movie draws my attention on various levels: it is at once a deep inspection of what it means to live and to be alive, it asks what exactly is the scope of one life in the grander scheme of the universe, on some level it is a movie about religion (I haven't really fully formulated this part of it yet), and of course, it's a touching romantic comedy :-).
I also found the specials (and this isn't the first time) to contain some very insightful information on teamwork and project management that would apply to almost any field (but that's a discussion for another day).
What also caught my attention was how director Marc Forster and the visual effects team realized Harold's thoughts as these planar "screens" with metrics, text, and data layered together. It's much the same way I visualize data, code, structures, and tasks, all on virtual screens that I slide around, stack, layer, and intermingle. I now realize that there is no organization to how I think about these constructs and abstractions...I simply see them in my mind as if before me was a stack of cards strewn about and yet I am able to reach out and pluck the ace of spades at will with no effort.
Maintaining such mental order requires a lot of effort and a lot of concentration. I think it is because of the amount of effort required to work the way that I do, that I am so unpleasant when interrupted (much to the dismay of my wife, mother, and coworkers). And believe me, it's not that I don't like to help others with the development issues or educate other developers and team members; rather, such tasks are not my primary concern and shifting gears is extremely difficult when you have to maintain such large abstractions and structures in the mind.
So of course, the question is, what is the solution? Well, perhaps I need to invest some time in some organizational books. Perhaps I need a whiteboard to help unload some of the data and make it easier to reload as well. Perhaps I need a bigger desk so I can scribble more and keep better notes.
Well, I think that about wraps this up. Possibly not the most coherent or well organized entry, but it contained data that would have kept me up all night if I didn't unload it 🙂
There used to be a time, decades ago, when there was only one telephone carrier and everyone was forced to use it, regardless of whether the service or price sucked.
Nowadays we have a much greater variety of choices from AT&T to Verizon to MCI for local and long distance calls. We also have some newcomers to the game such as Comcast and Cablevision who offer telephone service over cable.
For the longest time, my mother was using MCI for her local and long distance. For whatever reason, she suddenly decided (as she is oft inclined to do) that it cost too much. We decided to switch to AT&T as she felt that it was a trustworthy and reliable brand. Little did we know that the new AT&T seems to outsource its customer service and charges a hefty connection fee (even when no physical connection setup was required), and that she would end up spending exactly the same each month as she did with MCI...
Jump forward a few months after the AT&T debacle (they were still trying to get her to pay a connection fee...). After a year, her promotional rate with Comcast for Internet connectivity jumped dramatically. At this time, her best option - of course - was to switch over to the Comcast Triple Play. We were assured that the cable telephony was a good choice and that the battery backup on the modem meant that even when the power went out, we would still have dialing capabilities.
Of course, what they failed to mention was that if the Internet connectivity gets flaky (as is oft the case with Comcast), so does your ability to use the phone...D'oh! Well, it should have been obvious to me, but I dunno, I was thinking that maybe the modem had special capabilities that allowed it to operate independently of the Internet connectivity. Turns out that every once in a while, we'll pick up the phone and there will be no dial tone because the modem loses connection or the DNS servers are down somewhere on the grid or some other issue. It also turns out that the special telephony modem that we have to use is noticeably slower at servicing Internet traffic compared to my previous Motorola (blazing fast); there is now a noticeable lag when frequenting some of the web pages in my daily queue.
For the time being, the promotional price is great: about $33/month ($99/month for Triple Play for one year) for unlimited long distance to anywhere in the US. This is much better than what Verizon or AT&T charges for the same features (about $50/month). What they don't always make so clear is that after a year, the price jumps dramatically to $140.95/month or roughly the same price for telephone service as with Verizon or AT&T...except without the reliability of the good old PSTN.
If you really sit down to think about it, that comes out to roughly $600/year for phone service. That's PS3 territory.
But there is an alternative, there is a brave new world in telephony: Skype (okay, it's really not that new, but I don't personally know anyone who uses Skype exclusively of landlines (although I know a few who use cellular lines exclusively)).
I signed up for a free trial at the end of last year that gave me 30 days of SkypeOut for free. I found the service to be generally acceptable and convenient (since I spend almost all day in front of the computer anyways).
But what makes Skype even more compelling are the new accessories being developed around it: standalone (no PC required) devices which allow one to use Skype as a total replacement for landlines.
- The latest DECT technology
- Multi-handset capable (up to 4 each)
- Dual mode (supports PSTN and Skype)
- Don't require a PC to use
What seals the deal is that SkypeIn, which allows you to get a number that any landline or cellular line can dial and features unlimited calls anywhere in the US to landlines and cellular lines (and of course free calls to any other Skype user), costs only $60/year. So for a tenth of the cost of traditional landlines or cable telephony, I can get roughly the same quality services and I can call from my computer. I also think that the portability is cool as hell...I can answer my phone anywhere in the world as long as I'm connected to the Internet.
I convinced my wife that when we move this time (just about 20 days to go), we're gonna try to go cold turkey with Skype (we went with the Netgear phone) and see if it'll work for us. We both make long duration long distance calls pretty regularly for our jobs so it'll be interesting to see how it works out. For us, 911 capabilities are not an issue as we both have cell phones. Dependency on the Internet connection is also not a problem as it's no worse than Comcast or Optimum and whenever we tend to be on long important calls, we also tend to be in some sort of net conference...so having the reliability of the PSTN is kind of pointless if the net meeting is down.
So overall, I'm excited to stick it to the man 😀
I'll keep this site posted with my review and experiences as I spend more time with Skype and the Netgear phone.
Argh! Chalk this one up to poor product description, packaging, or something like that, but it wasn't clear at all that one needs to purchase SkypeOut/Skype Unlimited to receive the unlimited outbound calls. In essence, $60 only buys an inbound number and unlimited inbound calls...outbound calls with SkypeIn are still charged at local/long distance rates.
I'm kind of conflicted...on the one hand, dude, it's $90 for a whole year. On the other hand: Damn these people for not clearly advertising their services and costs and using sensible bundles to do so.
In describing my approach to software development, I like to use the term practical artistry. What does this mean exactly?
Well, the practical part of it is that the class libraries, interfaces, and components have to work the way that they were designed. They should also be easy to use, easy to understand, easy to integrate with.
The artistry portion of this term is much harder to quantify. Just what is artistry when used in the context of software development?
Art Tatum offers a very compelling definition:
Art is a method of communication which unifies surface details and form while taking both the intended meaning and aesthetics into account. This requires significant amounts of problem solving. The artist is constantly asking, "How can I best express this idea without ruining the proportions of the work as a whole?"
This ties in with Fred Brooks' principle of conceptual integrity for it is the artist alone who sees the proportions of his work and the artist alone who shall ensure that the work abides by the guidelines of the original design intent.
Another term that I like to use is exemplary craftsmanship. This concerns the little details that make code aesthetically pleasing, readable, and well crafted. Many small details affect this quality of code such as consistent naming schemes, using extra keystrokes and not abbreviating non-standard terms, ensuring that spacing is consistent, formatting code in a consistent manner so as to make it more readable, and commenting public APIs. On a higher level, it concerns the organization of code and the clear separation of domains (not in the Fowler sense, but in a more abstract sense) in a manner that enhances the extensibility and orthogonality of the code.
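As a trivial illustration of how much those small details matter, here is the same routine twice (Python for brevity, names invented; the behavior is identical, only the craftsmanship differs):

```python
# Before: terse names, inconsistent spacing, no documented contract.
def calc(p,q ,t):
    return p*q*(1+t)

# After: descriptive names, consistent formatting, a documented contract.
def calculate_total_price(unit_price, quantity, tax_rate):
    """Return the total price for a line item, tax included."""
    subtotal = unit_price * quantity
    return subtotal * (1 + tax_rate)

# Both compute the same value; only one explains itself to the next reader.
print(calc(10.0, 3, 0.05) == calculate_total_price(10.0, 3, 0.05))
```

The compiler doesn't care which version it gets; the developer who inherits the code six months later certainly does.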
It's not just code, it's any trade. A panel and house wired by a master electrician will certainly be different than a panel and house wired by an apprentice. The cabling will be neat, the runs will be well thought out, the circuits will be well labeled, the panel will be well organized, little details will have been considered, and the artistry of the finished work is apparent even to laymen.
With this in mind, I consider myself to still be an apprentice; I still seek to learn the trade from a master craftsman and I still seek to hone my skills and develop my artistry so that I may also craft software of a high caliber. But I work hard to ensure that some sense of practical artistry and exemplary craftsmanship is apparent in everything I do: from simple tasks like ensuring proper indentation in my source files, selecting the right margins in my documentation, and using the right fonts, to more complex design issues like organizing libraries in proper dependency chains, achieving orthogonality in modules, and organizing objects in consistent and well defined fashions (i.e. utilizing design patterns).
Unfortunately, in my short career, my interactions with other developers have left me disappointed on this front for the most part except for three individuals whom I didn't have nearly enough time to interact with (this is not to say that I haven't worked with many fine developers, but three stand out as practicing these principles of craftsmanship). These three were true craftsmen in the sense that the little details mattered to them. Improving their skills as developers was an important aspect of everyday development. Writing good code and following well known guidelines and principles meant something. The naming of every class, of every variable, required at least some passing thought so as to ensure that each construct was congruent and aligned with the whole.
Of course, such discussion of artistry as it applies to software is not just frivolous academia; Maarten Boasson writes in The Artistry of Software Architecture:
Designing software is not very different from designing any other complex structure: Few people are good at it; no single recipe always produces a good product; and the more people involved, the smaller the probability of success. On the other hand, a design produced by someone who is good at design provides an excellent basis for long, reliable service.
Software engineers consider the artistry of a design not only by evaluating its aesthetics but also by evaluating its practical results, such as orthogonality and added extensibility. Boasson further comments:
In exceptional cases, a good software design is no less valuable than the great masterpieces that have been created throughout our rich history. Examples of both bad and good designs can be found all around us, in almost every engineering field; practically everyone recognizes a piece of art when they see it.
So I often wonder why it is the case that I encounter and, of much greater concern, find high degrees of tolerance for bad design. Not just bad design, but bad development practices like inconsistent usage of formatting elements (spacing, newlines, tabs), naming namespaces and classes against well established guidelines and practices, and other details like inconsistent casing. Leadership just doesn't seem to care for the most part and it requires the rare and truly inspired individual project manager to understand the long term value of encouraging practical artistry and exemplary craftsmanship.
Bear in mind: it's not that I approach writing software with any sort of artistry or snobbery in mind...indeed, a good portion of it is the grunt work - simply putting the hammer to the nail, or putting the brush to the canvas, so to speak. But at the end of the day, there is a personal satisfaction that comes not just from writing any code, but from writing good code. There is a satisfaction that comes from recognizing and implementing a superior system design. Of course, it goes beyond personal satisfaction: good design, as Boasson writes, can provide long-term value in the form of extensibility, maintainability, and reusability.
In almost all cases, as the majority of developers are not self-motivated to write such code, it takes strong leadership, clearly defined design guidelines, and enforcement of those policies to ensure that quality software - not just working software, but quality software - is crafted.
To me, it is the little details that go towards creating a better product. There is certainly a time and place for prototyping and RAD - and certainly, I utilize these techniques all the time, but there is also a time to formalize the lessons learned from such exercises and to create a masterpiece...to write code that you would have no qualms about showing to the world and exposing it to critique.
It is with this in mind that I find myself currently flustered. Is it just me? Am I being too uppity about all of this? It's hard to say...I am truly conflicted, as I cannot see how I can work productively and cooperatively on a team with people who do not honor the same sense of craftsmanship and artistry. I awake to find that someone has scribbled on my canvas in a weak imitation of my style, using colors that clash with the existing palette. Or, analogously, I open the electrical panel to find that someone has stuffed low-grade wire haphazardly into it without any clear labeling. I cannot help but cringe at the thought of integration - yes, cringe. I don't want to deal with ugly code, and yet leadership doesn't seem to care one way or the other, and I am powerless to effect change (partly because I am a blunt edge and possess no sense of finesse whatsoever in dealing with these situations)...*sigh*
I can only hope that some of my desire to achieve practical artistry in code and design inspires others on my team, otherwise this will be painful to endure.
Hmm...I thought it was kind of odd that Visual Studio didn't support IntelliSense for WCF configuration elements (but then again, I've kind of become intimately familiar with the core elements without having the schema).
Turns out that you have to take care of it manually by updating the schema file for configuration.
Helps configuration go much more smoothly until Orcas arrives.
I'm surprised that this hasn't been posted across more blogs since it seems like a big, big bonus to have this for developing WCF applications. Is it common knowledge? Was there an SP that updated the schema files? I know that installing .Net 3.0 didn't change the configuration files on my system at least.
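For anyone looking to do the same, the fix (as I understand it) is to add the WCF configuration elements to Visual Studio's configuration schema, DotNetConfig.xsd, typically found under the Visual Studio Xml\Schemas directory (the exact path depends on your install). Just as a sketch of where such declarations live - the real system.serviceModel schema is far more detailed than this permissive placeholder:

```xml
<!-- Sketch only: a permissive element declaration added to DotNetConfig.xsd
     so the editor stops flagging <system.serviceModel> as unknown. -->
<xs:element name="system.serviceModel">
  <xs:complexType>
    <xs:sequence>
      <!-- Accept any child elements; a full schema would enumerate
           bindings, services, behaviors, etc. -->
      <xs:any processContents="lax" minOccurs="0" maxOccurs="unbounded" />
    </xs:sequence>
  </xs:complexType>
</xs:element>
```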
If there's one thing that really irks me, it's allegiance for the sake of allegiance.
Don't get me wrong, I have nothing against Microsoft Enterprise Library; in fact, in the absence of any better alternatives, EL is indeed the best option for a whole slew of development tasks and better than rolling one's own solution. But of course, the question then is: are there better alternatives?
Take logging for example. There has been a movement in our team to use EL instead of log4net. In all honesty, it's not a battle that I want to wage...it's simply not worth it for logging, but I think that for anyone considering one or the other, some serious evaluation is in order. EL logging simply cannot hold a candle to log4net.
As I commented on James Newton-King's blog:
About 2 years ago, I started to seriously integrate logging into my applications. During that time, I went through several rounds of testing both log4net and EL, and I am currently in a discussion with other developers in my group on whether we should use the best available tool (log4net 1.2.11) or do it the Microsoft way (EL 3.0/3.1).
In my opinion, the lack of hierarchical loggers is a significant weakness of EL, as it requires much more thought and planning to make sure that you can isolate components down the line. While flexibility is good, I think that having the free-form category, priority ID, and event ID makes it far more confusing than it has to be...18 overloads? Do you expect people to remember or use the right category every time? What about misspellings? Of course, one could always write a layer to abstract these weaknesses of EL - say, translating Log.Debug() to some internal EL call - but then you have to ask if you've gained anything in the process.
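To make the hierarchy point concrete, here is a sketch of what I mean in log4net configuration (the names are illustrative): loggers are named like namespaces, and a logger such as MyApp.Data.OrderRepository inherits the level and appenders of its ancestors unless overridden, so you can dial an entire subsystem up or down with one element:

```xml
<log4net>
  <root>
    <!-- Everything defaults to WARN and writes to one appender. -->
    <level value="WARN" />
    <appender-ref ref="RollingFile" />
  </root>

  <!-- Any logger named under MyApp.Data (e.g. MyApp.Data.OrderRepository)
       logs at DEBUG; the rest of the application stays at WARN. -->
  <logger name="MyApp.Data">
    <level value="DEBUG" />
  </logger>
</log4net>
```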
Aside from this, I'm not sure if you are aware, but log4net utilizes buffering for certain classes of appenders, the ADO.Net appender being one of them. What does this mean? I think you will see significant performance increases with buffered appenders (say, buffering 250 messages into one flush versus making 250 individual inserts).
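For reference, turning buffering on is essentially a one-line change in the appender configuration - a sketch, with connection details elided:

```xml
<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
  <!-- Hold up to 250 events in memory and flush them in one batch. -->
  <bufferSize value="250" />
  <connectionType value="System.Data.SqlClient.SqlConnection, System.Data" />
  <!-- connectionString, commandText, and parameter mappings
       omitted for brevity; see the log4net config samples. -->
</appender>
```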
In addition, the stored procedures used by EL (although they can certainly be modified) add overhead out of the box. For example, every call to WriteLog is *always* followed by a second call to AddCategory, regardless of whether the category already exists; since there are no triggers on insert to the Log table and WriteLog itself does not call AddCategory, this is an extra round trip between the calling client and SQL Server. And in the scenario where the category name isn't yet in the database (yet another lookup), *another* lookup and insert is performed via InsertCategoryLog. Since there are FK constraints on the tables, each insert also incurs an additional lookup behind the scenes to validate the foreign key.
Aside from this, I've found that log4net tends to fail gracefully whereas EL...not so much. For example, if you try to use EL logging but fail to include the correct configuration, the exception will bubble up to your application. log4net, on the other hand, will always trap the exception and not let it bubble up, and thus will not raise an error where you would not expect one. Sure, you could put try-catch around all of your logging statements if you're using EL...but then you have to ask yourself WHY?
I also forgot to mention that log4net, out of the box, allows loading of configuration files from any XML string (in application code without recompile of the source code). This means that if one were so inclined, logging configuration would be stored on a central server, downloaded, and loaded on the fly. If one were so inclined, it could be stored as embedded content with the assembly. This flexibility allows for a great variety of design scenarios out of the box.
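A quick sketch of what that looks like in code - the retrieval of the XML is hypothetical (it could just as easily come from an embedded resource), but the configuration call is log4net's XmlConfigurator:

```csharp
using System.Xml;
using log4net.Config;

class LoggingBootstrap
{
    // configXml is a full <log4net>...</log4net> configuration document,
    // obtained however you like: downloaded from a central server,
    // read from an embedded resource, etc. No recompile required.
    static void ConfigureFromString(string configXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(configXml);

        // Configure the default repository from the root <log4net> element.
        XmlConfigurator.Configure(doc.DocumentElement);
    }
}
```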
James also has two incorrect interpretations of the facts:
- log4net is not being actively developed. Indeed, there was a looong period of inactivity on the release side while the log4net project was going through incubation with the Apache Software Foundation, but during that whole time the mailing lists were active, and so were the developers - helping users, answering questions, and certainly building todo lists for the release after incubation. I would also offer the opinion that log4net should have fewer releases, as it is the more mature, tested, and stable of the two libraries.
- EL is somehow more extensible than log4net. This simply isn't true, as both provide the source and both provide extensive SDK documentation, samples, and examples.
Commenter Smitha Mangalore also raises an interesting point: log4net supports logging across more platforms than EL does at this point (not to mention supporting more database targets out of the box).
Just my take. I think both have their place. For application event logging and tracing, nothing beats log4net in terms of performance, ease of use, ease of configuration (yes, I find the configuration to be much more concise and understandable with log4net), and overall usability.
So my interest was piqued: this whole discussion made me want to check the performance difference for myself.
So here are the test conditions:
- CPU: E6400 Core 2 Duo (2.13GHz @ 3.2GHz)
- Memory: 4 GB RAM
- Disk 1: 36 GB @ 10,000 RPM
- Disk 2: 36 GB @ 10,000 RPM
- Disk 3: 250 GB @ 7,200 RPM
- Disk 4: 74 GB @ 10,000 RPM
- Test Scenario:
- The database will be dropped before each test run.
- For EL, the default procedures and tables will be used.
- For log4net, the sample configuration and table definition will be used.
- A Stopwatch instance is created and started before the code enters a loop with 100,000 iterations. The instance is stopped as soon as it exits the loop and the result is written to the console.
- For each pass, different sets of logging commands will be commented out (in retrospect, I should have used conditional compilation symbols).
- The only listener/appender is a database listener/appender.
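The timing loop itself was nothing fancy - roughly this shape (the message content and logger setup here are illustrative, not the exact test code):

```csharp
using System;
using System.Diagnostics;
using log4net;

class Benchmark
{
    static readonly ILog Log = LogManager.GetLogger(typeof(Benchmark));

    static void RunPass()
    {
        // Start timing immediately before entering the loop.
        Stopwatch stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < 100000; i++)
        {
            // The set of logging calls here was varied between passes.
            Log.Info("Test message " + i);
        }

        // Stop as soon as the loop exits and report to the console.
        stopwatch.Stop();
        Console.WriteLine("Elapsed: {0:F1}s", stopwatch.Elapsed.TotalSeconds);
    }
}
```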
- EL: 138s
- log4net w/buffer = 1: 116s
- log4net w/buffer = 100: 78s
- log4net w/buffer = 250: 76s
- log4net w/buffer = 500: 78s
- Constant Work Time: 18s
So there you have it: with buffering enabled, log4net is nearly twice as fast as EL - 138s vs. 76s is about 1.8x, and if you subtract the 18s of constant work time from both, the pure logging cost is 120s vs. 58s, or better than 2x.