When writing server-side sinks, it's important to remember that order matters when declaring your sinks. For example, consider the following block in a config file:
<provider type="LoggingSink.ServerSinkLoggerProvider, LoggingSink" />
<formatter ref="binary" typeFilterLevel="Full" />
The problem here is that if ServerSinkLoggerProvider creates a sink that implements IServerChannelSink, the requestMsg parameter will be null in its ProcessMessage method. If you reverse the ordering of the 2 lines (i.e. put the formatter first), everything works fine.
This is by design, and due to the chaining of sinks that occurs in .NET Remoting. In the first case, the formatter has not yet deserialized the message, so we can't easily get to the IMessage that was passed to us. In the second case, the formatter takes care of the deserialization first, and can then pass the IMessage to the DispatchChannelSinks for further processing and investigation. The overview of .NET Remoting layers from MSDN is lacking a lot of detail, but hints at what causes the differences. Once I went back to Ingo Rammer's book, the technical reason for this behavior became obvious.
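For reference, this is what the working ordering looks like with the formatter declared ahead of the custom sink provider. The surrounding channel element (tcp on port 9000) is just for illustration; only the order of the two provider lines matters here:

```xml
<channels>
  <channel ref="tcp" port="9000">
    <serverProviders>
      <!-- Formatter first: it deserializes the stream into an IMessage... -->
      <formatter ref="binary" typeFilterLevel="Full" />
      <!-- ...so the custom sink sees a non-null requestMsg in ProcessMessage. -->
      <provider type="LoggingSink.ServerSinkLoggerProvider, LoggingSink" />
    </serverProviders>
  </channel>
</channels>
```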
I use OutputDebugString (ODS) a lot when trying to trace through a problem with an application. Coupled with DebugView from SysInternals, you can easily get a very good idea of where things are going wrong.
In .NET, you use Debug.WriteLine() to write ODS messages. This requires that your assemblies be built with the DEBUG symbol defined to see the ODS messages. When working with ASP.NET pages, you have some options to achieve this (the two easiest are listed here):
- Include the Debug attribute in the Page tag for the aspx page. For example:
<%@ Page language="c#" Debug="true"%>
- Include the debug attribute in the web.config file. For example, put this in the <system.web> section underneath the <configuration> node. You can also do this by changing the config file from the IIS Manager: select Properties for the Virtual Directory, go to the ASP.NET tab, select Edit Configuration, go to the Application tab, and check Enable Debugging.
<compilation debug="true" defaultLanguage="c#" />
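To tie it together, here is a minimal sketch of emitting an ODS message from a page's code-behind (the page class and message text are made up for illustration):

```csharp
using System;
using System.Diagnostics;
using System.Web.UI;

public partial class OrdersPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Debug.WriteLine is compiled away unless DEBUG is defined,
        // which is what the debug="true" settings above ensure.
        // The message shows up in DebugView via OutputDebugString.
        Debug.WriteLine("Page_Load fired for " + Request.Path);
    }
}
```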
Someone on the newsgroups posted a link to SourceMonitor. It is a tool that analyzes source code (Delphi, C++, C#, Java, VB, C, and HTML) and reports on various code metrics, like the number of lines of code, average statements per method, methods per class, depth of method, global routines and variables, cyclomatic complexity of a method, and tons more. The charts are incredible (Kiviat graphs that are configurable with tolerance levels), and you can drill down and sort by any criterion. By doing this, you can identify methods that need to be refactored. In addition, you can View Source on the details and get buttons to take you straight to various offending code metrics (max depth, max complexity method, etc.). The detail view also shows graphs and more information about each unit. Lastly, you can take multiple snapshots so you can monitor the health of your source code over time.
This tool is listed as being freeware, but this is one instance where you get WAY more than you pay for.
SmartInspect is a new logging tool published by Gurock Software. It works with Delphi (including BDS 2006), .NET, and Java applications.
By adding some simple code to your application, you can get SmartInspect to keep track of the logging in a meaningful way (by process, thread, method, session, etc.). Log events that are "children" of other events can be rolled up and grouped to allow you to focus precisely on what you're interested in. Logging can be done via file or TCP/IP, so you can coordinate log messages from multiple machines. Source code is included for the objects that SmartInspect uses, and you can extend the objects to do whatever you dream up (filter packets, automatically colorize certain events, etc.). You can even install Code Snippets/Templates into the IDE that you use. There are many more features than this, but those are the ones that really stood out for me.
I would highly recommend this product. This company has gone from nothing to having one of the most professional product experiences I have ever seen. They have a very complete, nice-looking, and easily navigable web site; incredibly great documentation (both in general and for developers); trial versions; multiple support options (forum, knowledge base, and email); a blog; walkthrough samples; and an extremely well-polished user interface. I wish all companies came on to the scene with such thorough attention to detail.
In short, great job guys. I look forward to using this product more as time passes.
There are times when you get painted into a corner. Sometimes you paint yourself in, and sometimes you get painted in by others. If you do it to yourself, it is much easier to get out: just change whatever is needed to escape the jam. Other times, your hands are tied and you can only make changes to specific areas of code, like the implementation section. A couple of reasons for this would be preserving binary compatibility of a published interface in your framework, or a 3rd-party component relying on the DCU compatibility of another 3rd party. Given all of the above, what do you do if you want to add a property to an existing class without changing the interface section of that unit? I came up with the following hack to get this done, but be cautioned, this is definitely not for the squeamish! :)
Let's say you have a setup like the following:
TType = class
  FID: integer;
  property ID: integer read FID write FID;
  procedure Display;
end;

function GetType: TType;  { declared in the interface section }

{ implementation section: }
var
  gblType: TType = nil;

function GetType: TType;
begin
  if gblType = nil then
    gblType := TType.Create;
  Result := gblType;
end;

procedure TType.Display;
begin
  if Assigned(gblType) then
    ShowMessage('...'); { actual message text not shown here }
end;
As time passes, you realise you want to only do the ShowMessage some of the time, and to do this you think a CanDisplay property would be perfect on the TType class. But remember, you can't change the interface section. You can add code like this to get things ready (only the changed section is listed here):
{$M+} { published RTTI, needed later for IsPublishedProp }
TDerivedType = class(TType)
  FCanDisplay: boolean;
published
  property CanDisplay: boolean read FCanDisplay write FCanDisplay;
end;

function GetType: TType;
begin
  if gblType = nil then
    gblType := TDerivedType.Create;
  Result := gblType;
end;

procedure TType.Display;
begin
  if (Self is TDerivedType) and TDerivedType(Self).CanDisplay then
    ShowMessage('...'); { actual message text not shown here }
end;
Note that the main changes are to add the new class - derived from the base class - in the implementation section; update the singleton TType return function; and update the Display method to conditionally display the message.
In order to set this property, we will rely on RTTI since callers outside the scope of this unit will have no idea what the TDerivedType is. The code for this would look similar to this:
uses TypInfo; { for IsPublishedProp and SetOrdProp }

procedure SetCanDisplay(t: TType; Value: boolean);
begin
  if IsPublishedProp(t, 'CanDisplay') then
    SetOrdProp(t, 'CanDisplay', ord(Value));
end;

procedure TForm3.btnDerivedClick(Sender: TObject);
var
  t: TType;
begin
  t := GetType;
  t.ID := 30;
  SetCanDisplay(t, True); { set via RTTI, then display }
  t.Display;
end;
Obviously, this technique should only be used as a last resort. However, it can get you out of that tight spot for a little while, and the changes are pretty well self-contained so you can do it right when building the next version.
Date: March 16th, 2006
Time: 6:30pm - 9:00pm
Location: Marriott West
Driving Directions: "Former Highway 164 North" is now Highway F North. Or, just pay attention and take Exit 295 (as they mention in the directions on the web page).
What: John Kaster will be presenting the new features of BDS 2006, which includes Delphi, C#, and C++ personalities. This release has been rock solid for us, so I imagine this will be one of those must-upgrades for everyone who uses Borland products. Last year, John raffled off 2 copies of BDS (one Architect, and one Pro) because we had good numbers, so be sure to get the word out if you want a chance at that kind of give-away again. There should also be discount coupons available. There is also typically good swag to be had, and the presentations John does are second to none.
Please RSVP to email@example.com so we can start getting a head count to make sure the room is sized appropriately.
Scott Simonson was able to recover some of the mailing list that we used last year, but I'm sure it's not complete. Please forward this link, or the upcoming email to any interested Delphi, C#, or C++ developers and have them email me at firstname.lastname@example.org to confirm attendance.
For those that care, it's also listed in EventCentral.
I keep expecting ADO.NET to work as well as Delphi 1 did 10 years ago with respect to data access, and as a result, my expectations keep getting dashed. Of course, some things (like MIDAS) only materialized in Delphi 3, so that's a scant 8 years ago. :-( It seems that all of the collective wisdom in the .NET world to remote data (via .NET Remoting or Web Services) consists of one of 3 approaches, with zero tolerance for deviation.
- The DataSet approach. Use 2 methods for each entity that you want to remote. For example, you find numerous posts on Google and references on MSDN where you need to call myAppServer.GetCustomer() to get a DataSet and then call myAppServer.UpdateCustomer(DataSet ds) to update the customer. Repeat this over and over and over again for each entity.
- The built-in serialization approach. Failing #1 above, people then start to tell you to create true business objects. You just need to take all of the tables that you use, create a bunch of objects, map the objects to the DB, and you're off and running. You can also use frameworks like Rocky Lhotka's CSLA.
- The ORM approach. ObjectSpaces has died, but that doesn't mean ORM is dead. There are a variety of options here. To name a couple that range from free to commercial, and vary in features: NHibernate, which is an open-source port of the Java persistence framework, Hibernate; and LLBLGen Pro. Of course, this means you need to buy into the framework you use.
However, what I really wanted was a way to remote data, and not worry about the more OO-centric techniques at this point. As a result, I wrote a framework, DrTier, to do just that. I now have it working the way I want in .NET. DataSets are streamed between client and server, user code is minimal, the app servers are extensible, and I'm able to take advantage of the best things that .NET has to offer. However, ADO.NET is not among them.
For example, if you have a stateless server, you cannot count on still knowing the SQL statement that was used to Fill the DataSet by the time the client comes back to the app server to update the data. I ended up using the DataSet.ExtendedProperties property to cache the SQL SELECT statement and pass it back and forth between client and server. By doing this, I can guarantee that I'm building the appropriate INSERT/UPDATE/DELETE SQL statements (DML) when I need to update the DataSet.
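As a sketch of the ExtendedProperties technique (the method names, key name, and use of SqlCommandBuilder here are my own illustration, not DrTier's actual API):

```csharp
using System.Data;
using System.Data.SqlClient;

public class CustomerServer
{
    private string connStr = "...";  // connection string elided

    public DataSet GetCustomer(string selectSql)
    {
        DataSet ds = new DataSet("Customer");
        using (SqlDataAdapter da = new SqlDataAdapter(selectSql, connStr))
        {
            da.Fill(ds);
        }
        // Cache the SELECT on the DataSet itself; string-valued
        // ExtendedProperties survive serialization, so the statement
        // makes the round trip to the client and back.
        ds.ExtendedProperties["SelectSql"] = selectSql;
        return ds;
    }

    public void UpdateCustomer(DataSet ds)
    {
        // The stateless server recovers the original SELECT from the
        // DataSet instead of having to remember it between calls.
        string selectSql = (string)ds.ExtendedProperties["SelectSql"];
        using (SqlDataAdapter da = new SqlDataAdapter(selectSql, connStr))
        using (SqlCommandBuilder cb = new SqlCommandBuilder(da))  // or hand-built DML
        {
            da.Update(ds, ds.Tables[0].TableName);
        }
    }
}
```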
Speaking of which, ADO.NET wants you to create DML statements for every table. There are countless posts and articles chastising the use of CommandBuilder (poor performance, unoptimized for MSSQL, etc.). Creating your own DML statements at run-time is no picnic, even after we've solved the above problem. If you get schema information based on your SELECT statement, you will see that the types for each field are provider-specific. That means that you would need to have some kind of mapping between provider-specific types and DbType, or find another solution to parameterize your queries (dynamic type instantiation based on the string types returned in GetSchemaTable comes to mind as one possibility).
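One possible shape of that last idea, sketched under my own assumptions (this is not how DrTier actually does it): read the CLR types out of GetSchemaTable and let each parameter infer its DbType from a value of that type, avoiding a hand-built provider-type mapping.

```csharp
using System;
using System.Data;

static class SchemaHelper
{
    // Given a command for the SELECT, read its schema and create
    // provider-neutral parameters on a DML command.
    public static void AddParametersFromSchema(IDbCommand select, IDbCommand dml)
    {
        using (IDataReader rdr = select.ExecuteReader(
            CommandBehavior.SchemaOnly | CommandBehavior.KeyInfo))
        {
            DataTable schema = rdr.GetSchemaTable();
            foreach (DataRow row in schema.Rows)
            {
                string column = (string)row["ColumnName"];
                Type clrType = (Type)row["DataType"];  // CLR type, not provider-specific

                IDbDataParameter p = dml.CreateParameter();
                p.ParameterName = "@" + column;
                // Assigning a default instance of the CLR type lets ADO.NET
                // infer the DbType; real values get assigned at update time.
                p.Value = clrType.IsValueType
                    ? Activator.CreateInstance(clrType)
                    : (object)DBNull.Value;
                dml.Parameters.Add(p);
            }
        }
    }
}
```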
Another lesson learned: when using the Data Access Block, I can't take advantage of most of its methods because they aren't customizable at all. You want a DataSet loaded with schema information? Good luck. Now you're using Database.DbProviderFactory to create concrete classes, just like you would in straight ADO.NET. The helper methods lack extensibility, so you're forced into this pattern. Returning fully formed DbCommands that point to a shared DbConnection isn't really even supported. You need to do that manually, too.
I won't go in-depth on the other features I found lacking (compared to Delphi), like ProviderFlags, UpdateMode, TFields, extensive events during reconciliation, robust error resolving support, etc., etc., etc. It seems that ADO.NET forces you into a pattern, and if you want to deviate from that pattern, you had better be prepared to work.
Yes, I have an ulterior motive. I want my app servers written in .NET, and I want them to behave like MIDAS app servers so that I can call them from existing Delphi Win32 clients. Next up, I will need to write the interop code to get things working between MIDAS and DrTier. Once all of that is done, we can get our feet wet in .NET without resorting to a complete and total rewrite on both the client and server. So far, so good, but the finish line is a long way off, and I'm afraid ADO.NET will fight me every step of the way.
I've long held that knowledge comes in one of 2 flavors: borrowed or earned. Let me illustrate by example.
If someone comes to me and says "How does Application.OnException work?", the odds of that person ever retaining that information and converting it into knowledge are slim. BOCTAOE. The exception would be people who are human sponges who can soak information in, and retain it, much like trivia experts. The information becomes "borrowed knowledge".
On the other hand, if someone takes the time to look at the online help and/or manuals, search through the source code to see how it works, searches online, and writes test cases to exercise the functionality, then they have a much better chance of retaining the information. In short, they have earned the knowledge. You can start by borrowing knowledge to figure out where to go/what to do in order to earn the knowledge, but the last step of truly obtaining "earned knowledge" falls on you.