I've written about how I think that web applications are a hack. In that post I didn't think Silverlight was fully the answer, but at this point I think it's the best alternative.
I couldn't be more strongly in favor of using Silverlight for web applications (as opposed to web sites, which only provide content) wherever possible. Of course, even content can be a good use of Silverlight, as the NYT reader shows. If an MS shop's target audience doesn't include IT goons who somehow put Silverlight in a different category than Acrobat Reader, I don't think you have any other choice for new applications.
My coworkers are spending many hours trying to make one AJAX codebase work on different browsers. After a while it just gets demoralizing (and my employer is wasting tons of maintenance time). My mind is boggled by all the hackery necessary to make AJAX popup "windows" drag/drop via HTML (sure, Javascript makes it happen, but it's still just HTML). With Silverlight you're working on a platform designed for this kind of thing. It's designed for dragging and dropping and animations and all the other things applications need (not to mention streaming).
Yes, you can make all those things happen with Javascript, and I fully understand the power of functional languages, but who would rather maintain a line-of-business application written in Javascript than one written in C#? Not to mention the aforementioned ever-growing browser compatibility issues.
How do loosely typed Javascript objects compare with full-featured Silverlight controls, where UI and logic are contained in one distinct entity, combined with design-time configuration? Not very well on the maintainability scale.
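To make that concrete, here's a minimal sketch of the kind of self-contained control I mean (the control and all its names are hypothetical, not from any real project): the UI tree and the logic live together in one strongly typed C# class.

using System.Windows.Controls;

// Hypothetical example: a Silverlight control whose UI and logic form
// one strongly typed unit, instead of a pile of loose Javascript objects.
public class GreetingControl : UserControl
{
    private readonly TextBlock _label = new TextBlock { Text = "Hello" };

    public GreetingControl()
    {
        var button = new Button { Content = "Greet" };
        button.Click += (s, e) => _label.Text = "Hello from Silverlight";

        var panel = new StackPanel();
        panel.Children.Add(button);
        panel.Children.Add(_label);
        Content = panel;  // the control hosts its own visual tree
    }
}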
That's how I cast my vote, anyway. Long live Silverlight. I doubt it will be a technology that fades away, but I hope its user base grows and it continues to stay healthy in the future.
Friday, January 30, 2009
SSIS Gotchas
SQL Server Integration Services is very finicky (read “gives indecipherable errors and will make your life miserable”). I wrote some packages using VS2008 / SQL Server 2008. Following is a list of gotchas I ran across. Note that there’s no frigging way you can figure any of these out from the error messages alone:
- Restoring a database renders connections in packages unusable (even though everything looks good and testing the connection is fine). Thus, every time you restore a database that a package points to (which I was doing repeatedly during testing), you must go into the package, add a new connection from a data source (by right-clicking in the connection manager area of each package), and add the same source that's already being used (giving it a different name). Then you must go into each task item in the package and point it to the new connection. If that task had mappings (as from a flat file to a table), you must re-map everything. Enjoy!
- If you’re executing a package from .Net code, only the .Net connection types will run (see the sketch after this list for what executing a package from code looks like).
- There are three different forms of essentially the same connection type: “Native OLE DB”, “.Net providers for OleDb”, and “.Net Providers”.
- The “.Net Providers” didn’t seem to work at all; the “Native OLE DB” providers would not execute when attempting to run the packages from a web page on a different server; both the “Native OLE DB” and “.Net providers for OleDb” worked from my local machine.
- It seems the proper choice for calling packages from code is “.Net providers for OleDb”.
- The “.Net providers for OleDb” will not execute directly from within Visual Studio (i.e., you can’t right-click on the package and click Execute), so you can’t debug directly.
- The .Net connections ignore their timeout property. They simply won't work when running a long query. Enjoy!
- You can’t have a package that refers to a table that hasn’t yet been created; the package will give errors all over the place. So, for instance (IIRC), you can’t deploy a package that refers to a table that will be created by another package, because it won’t validate on deployment.
- Deployment of packages is a nightmare. There are myriad issues surrounding the demonic package property “ProtectionLevel”.
- Under most circumstances, you would want to make your connection string properties configurable at the server. That way, you can move the packages around or point them to different servers w/o having to redeploy them. In my case, I had no end of problems with this (and the packages need only run annually) so I just deployed and ran this way:
- For deployment, I would just do File | Save Copy of [package] As and copy the package to the server running Integration Services. This embeds the connection settings and doesn’t rely on any package configuration files. Then, for executing the packages from code, I would just refer to the server I copied the packages to.
- At one point, I had a package that had a bad path in one of its connection’s properties. This seemed to prevent all packages from working.
- Most destructive: changing the connections and re-deploying (as described above) doesn't overwrite the connections. I changed the connections from production back to QA, verified everything multiple times, redeployed (which prompts for confirmation that I want to overwrite the existing packages), and ran a package. Much to my surprise and chagrin, the package executed against production! In this case, the existing packages must be deleted first. Reporting Services has an “overwrite data sources” property, but I couldn’t find anything like it in SSIS.
- Another disturbing issue: packages run where they’re called from. If, in code, you execute some SQL against a particular server, that SQL executes on that server; but (confusingly, b/c I was importing and exporting data) when you execute a package, it executes on the machine it was called from, not on the server where the package lives!
- Warnings on package objects show up all over the place, and there's no figuring out why things don't consistently work, e.g., a login failure when there is nothing wrong with the connection. I think setting the property “BypassPrepare” to true helped.
- Unlike the predecessor technology, DTS, there is no way to switch a connection on a data flow object w/o losing the mappings. Enjoy!
- Every new package is named Package1; you must rename it and delete the Package1 file manually.
- If you have an extra line break at the end of a script task, it crashes. You can copy/paste that script into Management Studio and it will work fine.
- Closing / reopening the project sometimes causes connection references from tasks to be lost.
- Yet another disturbing issue: I was not able to get it down to a reproducible set of steps, but the project was reverting my script changes. I found that I had to check my script changes into SourceSafe immediately after saving, every time (I suspect the reverting took place somewhere in deployment).
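Since several of these gotchas involve executing packages from .Net code, here is roughly what that looks like; a minimal sketch against the Microsoft.SqlServer.Dts.Runtime API, with a hypothetical package path:

using Microsoft.SqlServer.Dts.Runtime;

class PackageRunner
{
    static void Main()
    {
        Application app = new Application();

        // Load the copy of the package saved to the file system
        // (via File | Save Copy of [package] As, per above).
        // The path here is hypothetical.
        Package package = app.LoadPackage(@"\\ssis-server\packages\AnnualImport.dtsx", null);

        // Note the gotcha above: the package executes on the machine
        // running this code, not on the server it was copied to.
        DTSExecResult result = package.Execute();
        System.Console.WriteLine("Execution result: " + result);
    }
}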
Thursday, January 08, 2009
Best way to get record count using ADO.Net?
I think this may apply to the Entity Framework too, but I'm not sure.
I was modifying a stored procedure which needed to run a query and also return the row count of that query. The same (fairly long) query had been pasted in a second time, using select count instead of select, to set the value of an out parameter. I didn't like this for a few reasons:
- Dual maintenance of the query.
- The stored procedure is twice as long.
- Inefficiency of calling the query twice. The count is already known by SqlServer after running the first query.
- Having another parameter to configure and hard-code the name of in the calling code.
So I looked at the properties of the SqlDataReader. It has a "RecordsAffected" property. Sounds a lot like "rows affected", no? Well, in order to get that count you must iterate through all the rows in the reader and close it. Obviously not what I'm looking for.
Next I looked at the SqlCommand object. When you call ExecuteNonQuery, you get records affected! Cool, must be what I'm looking for. Nope: when this object says "records affected" it means just that (unlike what SqlServer means by "rows affected"). It will only return the count of rows updated, deleted, or inserted.
So what seems to be the best way is to include a secondary result set in the stored procedure: "select @@rowcount". My coworker verified that this count is connection-specific (you don't get someone else's count). It's the count that Management Studio displays.
Accessing this count from a reader goes like so:
_reader.NextResult();  // advance past the main result set
_reader.Read();        // read the single row holding the count
int rowCount = Convert.ToInt32(_reader[0]);  // the count is the only column
The call to NextResult advances the reader to the next result set (the one that contains only the result of "select @@rowcount"). Since that result set contains only the count, we don't need any explicit column names.
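For completeness, here's a minimal end-to-end sketch. The connection string, procedure name, and table are hypothetical; assume the stored procedure runs its main query and then ends with "select @@rowcount" as a second result set.

using System;
using System.Data;
using System.Data.SqlClient;

class RowCountExample
{
    static void Main()
    {
        // Hypothetical connection string and stored procedure name.
        using (var connection = new SqlConnection("Server=.;Database=Example;Integrated Security=true"))
        using (var command = new SqlCommand("dbo.GetOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Process the main result set here.
                }

                reader.NextResult();                        // move to the "select @@rowcount" set
                reader.Read();                              // its single row
                int rowCount = Convert.ToInt32(reader[0]);  // its single column
                Console.WriteLine("Row count: " + rowCount);
            }
        }
    }
}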