So I've been working with Enterprise Library 2.0 (EL2) Logging Application Block recently and I've come across some quirks that are puzzling me.
First, I've used log4net for most of my logging in the past. Recently I took a look at NLog, because log4net is currently under "incubation" and has been inactive for a long time. The developers are still around, as shown by the activity on the mailing lists, but otherwise the codebase has been sitting there for quite some time (until recently) with no date for when it'll exit incubation.
Anyways, after checking out some performance numbers comparing EL2 and log4net, I was sold: easy configuration via the configuration GUI, easy to understand, tons of documentation, and it's first-party Microsoft (easy to get team members and managers to buy in).
So here I am working with it today, setting up my test code to automatically regenerate the database before each run, and my application code crashes when the logging fails with an exception. I had mistyped the path to one of my SQL files, so the database for the logging block was never created. Still, I don't think the right thing for EL2 to do is let that logging error bubble up into application code. With log4net, if the connection to the log database is broken, the AdoNetAppender simply fails without taking the rest of the application down with it. [Update: I can't reproduce it, but I know this is what caused the error, since as soon as the database was there, it was happy. Yet it's now running fine even without a database. Ugh, totally puzzling...]
Weird design choice. I guess it's useful to know that your logging block is failing, but isn't that exactly why there are multiple listeners: so that if one fails, you have a fallback (e.g. log all critical errors to the database, the event log, and a flat file)?
Secondly, as I'm looking at the database scripts for creating the procedures and database tables for logging included with the EL2 source code, I'm puzzled by the design choice.
Take a look at the code for adding a category:
```sql
CREATE PROCEDURE [dbo].[AddCategory]
    -- Add the parameters for the function here
    @CategoryName nvarchar(64),
    @LogID INT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @CatID INT
    SELECT @CatID = CategoryID FROM Category WHERE CategoryName = @CategoryName

    IF @CatID IS NULL
    BEGIN
        INSERT INTO Category (CategoryName) VALUES(@CategoryName)
        SELECT @CatID = @@IDENTITY
    END

    EXEC InsertCategoryLog @CatID, @LogID

    RETURN @CatID
END
```
First of all, why are the categories stored in a separate table? My guess is that the designers wanted to save some space in the log entry row by factoring the category out of it? I can't come up with another good reason, since the categories in the category table aren't associated with any application identifier (and the category names must all be unique). Profiler tells me that it takes at least 14 reads to write one entry to the log.
Not only that, adding the category and adding the log entry are two separate calls from the client, since the WriteLog procedure doesn't receive any category information. I'm going to go out on a limb and say that the only reason EL2 logging is able to outperform log4net is .NET 2.0-related optimizations.
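To show what I mean, here's a rough sketch of the kind of single-call design I'd have expected instead. To be clear, this is my own hypothetical alternative, not anything shipped with EL2; the table and column names (LogEntry, Severity, Message) are made up for illustration:

```sql
-- Hypothetical alternative: store the category name inline on the log row,
-- so one procedure call writes the whole entry in a single round trip.
-- (Table and procedure names are invented, not from the EL2 scripts.)
CREATE PROCEDURE [dbo].[WriteLogWithCategory]
    @CategoryName nvarchar(64),
    @Severity INT,
    @Message nvarchar(1500)
AS
BEGIN
    SET NOCOUNT ON;

    -- One insert, no lookup into a separate Category table and no second
    -- client call: we trade a bit of row width (repeating the category
    -- name per entry) for far fewer reads per log write.
    INSERT INTO LogEntry (CategoryName, Severity, Message, Timestamp)
    VALUES (@CategoryName, @Severity, @Message, GETDATE())
END
```

For a logging table that's written constantly and queried rarely, that trade-off (denormalize, write fast) seems like the obvious one to make.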
So I think it's back to log4net for me. I don't know how the rest of the team will take it, but it seems to be the better choice.