Date: March 16th, 2006
Time: 6:30pm - 9:00pm
Location: Marriott West
Driving Directions: "Former Highway 164 North" is now Highway F North. Or, just pay attention and take Exit 295 (as mentioned in the directions on the web page).
What: John Kaster will be presenting the new features of BDS 2006, which includes the Delphi, C#, and C++ personalities. This release has been rock solid for us, so I imagine this will be one of those must-upgrades for everyone who uses Borland products. Last year, John raffled off two copies of BDS (one Architect, and one Pro) because we had good numbers, so be sure to get the word out if you want a chance at that kind of give-away again. There should also be discount coupons available. There is also typically good swag to be had, and the presentations John does are second to none.
Please RSVP to firstname.lastname@example.org so we can start getting a head count to make sure the room is sized appropriately.
Scott Simonson was able to recover some of the mailing list that we used last year, but I'm sure it's not complete. Please forward this link, or the upcoming email, to any interested Delphi, C#, or C++ developers and have them email me at email@example.com to confirm attendance.
For those that care, it's also listed in EventCentral.
I keep expecting ADO.NET to work as well as Delphi 1 did 10 years ago with respect to data access, and as a result, my expectations keep getting dashed. Of course, some things (like MIDAS) only materialized in Delphi 3, so that's a scant 8 years ago. :-( It seems that all of the collective wisdom in the .NET world on remoting data (via .NET Remoting or Web Services) boils down to one of three approaches, with zero tolerance for deviation.
- The DataSet approach. Use two methods for each entity that you want to remote. For example, you find numerous posts on Google and references on MSDN where you call myAppServer.GetCustomer() to get a DataSet and then call myAppServer.UpdateCustomer(DataSet ds) to save the changes. Repeat this over and over and over again for each entity.
- The built-in serialization approach. Failing #1 above, people then start to tell you to create true business objects. You just need to take all of the tables that you use, create a bunch of objects, map the objects to the DB, and you're off and running. You can also use frameworks like Rocky Lhotka's CSLA.
- The ORM approach. ObjectSpaces has died, but that doesn't mean ORM is dead. There are a variety of options here that range from free to commercial and vary in features: NHibernate, an open-source port of the Java persistence framework Hibernate; and LLBLGen Pro, to name a couple. Of course, this means you need to buy into the framework you use.
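As a sketch of approach #1, the per-entity method pair typically looks something like this (the class, table, and connection-string names here are illustrative, not from any real app server):

```csharp
// Sketch of the "DataSet approach": one Get/Update pair per entity.
using System.Data;
using System.Data.SqlClient;

public class CustomerService
{
    // Illustrative placeholder; supply your own connection string.
    private const string ConnStr = "...";

    public DataSet GetCustomer(int customerId)
    {
        DataSet ds = new DataSet();
        using (SqlDataAdapter da = new SqlDataAdapter(
            "SELECT * FROM Customers WHERE CustomerId = @id", ConnStr))
        {
            da.SelectCommand.Parameters.AddWithValue("@id", customerId);
            da.Fill(ds, "Customers");
        }
        return ds;
    }

    public void UpdateCustomer(DataSet ds)
    {
        using (SqlDataAdapter da = new SqlDataAdapter(
            "SELECT * FROM Customers", ConnStr))
        using (SqlCommandBuilder cb = new SqlCommandBuilder(da))
        {
            // CommandBuilder derives the DML from the SELECT.
            da.Update(ds, "Customers");
        }
    }
}
```

Multiply this by every entity in your system and you see why the pattern gets tedious fast.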
However, what I really wanted was a way to remote data, and not worry about the more OO-centric techniques at this point. As a result, I wrote a framework, DrTier, to do just that. I now have it working the way I want in .NET. DataSets are streamed between client and server, user code is minimal, the app servers are extensible, and I'm able to take advantage of the best things that .NET has to offer. However, ADO.NET is not among them.
For example, if you have a stateless server, you cannot guarantee that the SQL statement used to Fill the DataSet will still be available when the DataSet comes back to the app server to update the data. I ended up using the DataSet.ExtendedProperties property to cache the SQL SELECT statement and pass it around between client and server. By doing this, I can guarantee that I'm building the appropriate INSERT/UPDATE/DELETE SQL statements (DML) when I need to update the DataSet.
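A minimal sketch of the ExtendedProperties trick, assuming SQL Server and illustrative names (DrTier's actual code differs):

```csharp
// Sketch: cache the SELECT in the DataSet itself so a stateless app
// server can rebuild the right DML when the DataSet comes back.
using System.Data;
using System.Data.SqlClient;

public static class RemotedData
{
    private const string SelectKey = "SourceSelect"; // key name is arbitrary

    public static DataSet Fill(string selectSql, string connStr)
    {
        DataSet ds = new DataSet();
        using (SqlDataAdapter da = new SqlDataAdapter(selectSql, connStr))
        {
            da.Fill(ds);
        }
        // ExtendedProperties travel with the DataSet through
        // serialization, so the server needn't remember anything.
        ds.ExtendedProperties[SelectKey] = selectSql;
        return ds;
    }

    public static void Update(DataSet ds, string connStr)
    {
        string selectSql = (string)ds.ExtendedProperties[SelectKey];
        using (SqlDataAdapter da = new SqlDataAdapter(selectSql, connStr))
        using (SqlCommandBuilder cb = new SqlCommandBuilder(da))
        {
            da.Update(ds); // DML derived from the cached SELECT
        }
    }
}
```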
Speaking of which, ADO.NET wants you to create DML statements for every table. There are countless posts and articles chastising the use of CommandBuilder (poor performance, unoptimized for MSSQL, etc.). Creating your own DML statements at run-time is no picnic, even after we've solved the above problem. If you get schema information based on your SELECT statement, you will see that the types for each field are provider-specific. That means you need some kind of mapping between provider-specific types and DbType, or another way to parameterize your queries (dynamic type instantiation based on the string types returned by GetSchemaTable comes to mind as one possibility).
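A rough sketch of what that run-time DML building entails, using GetSchemaTable and a hand-rolled (and deliberately partial) type-to-DbType map; all names here are illustrative:

```csharp
// Sketch: build a parameterized INSERT from a SELECT's schema,
// mapping each column's CLR type to DbType by hand.
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Common;

public static class DmlBuilder
{
    // Partial map for illustration; a real one must cover every
    // type your provider can return.
    private static readonly Dictionary<Type, DbType> TypeMap =
        new Dictionary<Type, DbType>
        {
            { typeof(int),      DbType.Int32 },
            { typeof(string),   DbType.String },
            { typeof(DateTime), DbType.DateTime },
            { typeof(decimal),  DbType.Decimal },
        };

    // Assumes cn is already open.
    public static DbCommand BuildInsert(DbConnection cn, string table,
                                        string selectSql)
    {
        DbCommand cmd = cn.CreateCommand();
        List<string> cols = new List<string>();
        using (DbCommand schemaCmd = cn.CreateCommand())
        {
            schemaCmd.CommandText = selectSql;
            using (DbDataReader rdr =
                schemaCmd.ExecuteReader(CommandBehavior.SchemaOnly))
            {
                DataTable schema = rdr.GetSchemaTable();
                foreach (DataRow row in schema.Rows)
                {
                    string name = (string)row["ColumnName"];
                    Type clrType = (Type)row["DataType"];
                    cols.Add(name);
                    DbParameter p = cmd.CreateParameter();
                    p.ParameterName = "@" + name;
                    p.DbType = TypeMap[clrType]; // throws if unmapped
                    cmd.Parameters.Add(p);
                }
            }
        }
        cmd.CommandText = string.Format(
            "INSERT INTO {0} ({1}) VALUES (@{2})",
            table,
            string.Join(", ", cols.ToArray()),
            string.Join(", @", cols.ToArray()));
        return cmd;
    }
}
```

Even this glosses over identity columns, computed columns, and quoting, which is exactly the point: none of it comes for free.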
Another lesson learned: when using the Data Access Block, I found that I can't take advantage of most of the methods in that block because they aren't customizable at all. You want a DataSet loaded with schema information? Good luck. Now you're back to using Database.DbProviderFactory to create concrete classes just like you do in straight ADO.NET. The helper methods lack extensibility, so you're forced into this pattern. Returning fully formed DbCommands that point to a shared DbConnection isn't really supported either; you need to do that manually, too.
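For illustration, here is roughly the manual provider-factory dance you end up writing yourself when the helpers don't fit; the method and parameter names are mine, not the Data Access Block's:

```csharp
// Sketch: fill a DataSet *with* schema information via the raw
// DbProviderFactory machinery, since the block's helpers won't.
using System.Data;
using System.Data.Common;

public static class ProviderHelper
{
    public static DataSet FillWithSchema(string providerName,
        string connStr, string selectSql)
    {
        DbProviderFactory factory =
            DbProviderFactories.GetFactory(providerName);
        using (DbConnection cn = factory.CreateConnection())
        {
            cn.ConnectionString = connStr;
            using (DbDataAdapter da = factory.CreateDataAdapter())
            {
                DbCommand cmd = cn.CreateCommand();
                cmd.CommandText = selectSql;
                da.SelectCommand = cmd;
                // The part the helper methods won't let you specify:
                da.MissingSchemaAction = MissingSchemaAction.AddWithKey;
                DataSet ds = new DataSet();
                da.Fill(ds); // Fill opens/closes the connection itself
                return ds;
            }
        }
    }
}
```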
I won't go in-depth on the other features I found lacking (features Delphi has had for years), like ProviderFlags, UpdateMode, TFields, extensive events during reconciliation, robust error-resolution support, etc., etc., etc. It seems that ADO.NET forces you into a pattern, and if you want to deviate from that pattern, you'd better be prepared to work.
Yes, I have an ulterior motive. I want my app servers written in .NET, and I want them to behave like MIDAS app servers so that I can call them from existing Delphi Win32 clients. Next up, I will need to write the interop code to get things working between MIDAS and DrTier. Once all of that is done, we can get our feet wet in .NET without resorting to a complete and total rewrite on both the client and server. So far, so good, but the finish line is a long way off, and I'm afraid ADO.NET will fight me every step of the way.
I've long held that knowledge comes in one of two flavors: borrowed or earned. Let me illustrate by example.
If someone comes to me and asks "How does Application.OnException work?", the odds of that person ever retaining that information and converting it into knowledge are slim. BOCTAOE. The exception would be people who are human sponges, who can soak in information and retain it, much like trivia experts. The information becomes "borrowed knowledge".
On the other hand, if someone takes the time to look at the online help and/or manuals, search through the source code to see how it works, searches online, and writes test cases to exercise the functionality, then they have a much better chance of retaining the information. In short, they have earned the knowledge. You can start by borrowing knowledge to figure out where to go/what to do in order to earn the knowledge, but the last step of truly obtaining "earned knowledge" falls on you.
The Application Blocks are released here. Good documentation. Good material. Good code. Good practices. Download it now and try it out.
I'm certain this one has been covered before, but I stumbled across it yesterday and thought it was quite cool. In Internet Explorer (IE) and Outlook Express (OE), you can hold down the Ctrl key and use the mouse wheel to increase/decrease the font size. This also means that applications that use IE (like the BDS Welcome Page) get the same benefit.
If you want to retrieve content from a web server that supports gzip compression, there are a few ways to get the data; here are a couple:
- Use TIdHttp and leave IdHttp1.Request.AcceptEncoding blank. If you do this, a Get call to the web server will return the full text, and you will not see the benefits of the gzip compression that can occur on the web server. Not exactly what we want.
- Use TIdHttp and assign IdHttp1.Request.AcceptEncoding to 'gzip'. If you do this, the web server will return gzipped content when doing an IdHttp1.Get(URL) call.
If you use #2 above, you will need to manually decode the returned data. I found the LVK components from the LVK web site to be most helpful in this task. I couldn't easily get the standard zlib or Indy 10 zlib implementation to work with gzip encoding, so I ended up using LVK. Using the code below, you get the data back in compressed format and decode it. As an aside, it looks like LVK also supports the deflate option in addition to gzip, and has tons of components and utility code.
// add lvkZLibUtils to the uses clause
procedure TForm3.Button2Click(Sender: TObject);
var
  inStream, outStream: TMemoryStream;
begin
  inStream := TMemoryStream.Create;
  outStream := TMemoryStream.Create;
  IdHttp1.Get(URL, inStream); // gzipped content arrives in inStream
  inStream.Position := 0;
  GZipDecompress(inStream, outStream); // LVK routine; exact name assumed, check lvkZLibUtils
  outStream.Position := 0; // decompressed data ready to read
end;
I logged this QC report about changes to the Uses page in the TLE not being saved. The report was marked as fixed, but I kept seeing the problem. The change in D2006 was that if you referenced another TLB in your TLB, but didn't use anything from the external TLB, the reference to the external TLB would be removed. So beware: if you add a TLB on the Uses page, make sure you use something from that TLB, or the reference will be gone the next time around. Thanks much to Chris Bensen for fixing it, and for explaining this to me countless times.
As I alluded to in an earlier post, I had trouble using the nz() function from a TADODataset in Delphi. Take the following SQL, which works just fine in Access itself:
SELECT DISTINCT ContestId, Hole, nz(Score,'MISSING')
This query behaves like isnull() in MSSQL: if the Score column is null, it returns the value 'MISSING' instead; if Score isn't null, the value of Score is returned. To me, this syntax is very intuitive, corresponds well with other DBMSs' isnull() functions, and clearly captures the intent of what you're trying to do in one concise statement. However, if you put this query in a TADODataset and try to open that dataset, you will be greeted with the following error: "Undefined function 'nz' in expression.". If you need this type of substitution in Access when executing from Delphi, I found the simplest way around it is to use the iif() and isnull() functions. It's more verbose, and I don't like it as much, but when you need things to work, sometimes you have to live with things that aren't aesthetically pleasing. The SQL above translates to this:
SELECT DISTINCT ContestId, Hole, iif(isnull(Score),'MISSING', Score)
This post goes in the funny category. Not funny like a well told joke, but funny like strange. In Access, if you create a subquery that has parameters, e.g.
SELECT ContestID, count(*) AS MissingHoles
FROM (SELECT DISTINCT ContestId, Hole FROM tblContestDetails WHERE CourseId=[CourseId] and Score is null) AS qry
GROUP BY ContestID;
You can close, save, and execute this query just fine. However, when you go back in to edit the query, it now looks like this:
SELECT ContestID, count(*) AS MissingHoles
FROM [SELECT DISTINCT ContestId, Hole FROM tblContestDetails WHERE CourseId=[CourseId] and Score is null]. AS qry
GROUP BY ContestID;
Note that the subquery is now enclosed in square brackets instead of parentheses. The only problem is that if you make a change to this new query (e.g. add a space somewhere) and try to save it, you will be presented with an error:
"Invalid bracketing of name 'SELECT DISTINCT ContestId, Hole FROM tblContestDetails WHERE CourseId=[CourseId'."
The solution is to put the parentheses back around the subquery every time you want to save. :-(