Wednesday, February 13, 2013

Develop Native Apps in C# for Windows, iOS, and Android

Presentation tonight; details on Meetup. Here are some useful resource links:

Tuesday, March 3, 2009

ASP.NET Security Presentation

I'm presenting April 8 at the TriNUG in Raleigh/Durham, NC. I'll be talking about web security and showing sample code. Presentation and sample code can be found here.

Thursday, January 29, 2009

LINQ to SQL vs. LINQ to Entity Framework

No, this isn't a cage match. Sorry to those who expected otherwise.

What I do want to point out is that the programming models are different, and they differ in ways that will make some people think certain features are missing from one tool or the other. If you're one of those people who thinks that LINQ to SQL has a performance problem because of lazy loading, or that LINQ to Entity Framework doesn't give you a way to turn off change tracking for query-only usage, then I'm talking to you, friend!

For instance, let's look at that change tracking issue. Both LINQ to SQL and LINQ to EF will track changes to entities so that you can change data in an object and save the data simply by submitting changes through the DataContext or ObjectContext. But for query-only operations, this change tracking is unnecessary, and you can turn it off. Here's how you do it in LINQ to SQL:

public static List<Customer> GetCustomers()
{
    using (NorthwindData dc = new NorthwindData())
    {
        dc.ObjectTrackingEnabled = false;
        return (from c in dc.Customers select c).ToList();
    }
}

And here's how you do it in LINQ to Entity Framework:

public static List<Customers> GetCustomers()
{
    using (NorthwindEntities ne = new NorthwindEntities())
    {
        ne.Customers.MergeOption = MergeOption.NoTracking;
        return (from c in ne.Customers select c).ToList();
    }
}

Note the big difference here. In LINQ to SQL, you turn off tracking in the DataContext. In LINQ to EF, you turn off tracking in the entity collection. If you're looking in the wrong place, you'll miss it.

The same pattern holds true with eager loading and lazy loading. In LINQ to SQL, it's controlled with a DataLoadOptions object attached to the DataContext. In LINQ to EF, it's set at the entity level, either in the query (eager loading) or in processing the results (lazy loading).
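As a quick, hedged sketch of that difference (assuming the Northwind Customer-Orders relationship is included in each model, which isn't shown in this post), eager loading looks like this in the two tools:

// LINQ to SQL: eager loading is configured on the DataContext, using DataLoadOptions.
using (NorthwindData dc = new NorthwindData())
{
    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<Customer>(c => c.Orders);
    dc.LoadOptions = options;
    List<Customer> customers = dc.Customers.ToList();   // Orders are loaded with each Customer
}

// LINQ to Entity Framework: eager loading is requested at the entity level, in the query itself.
using (NorthwindEntities ne = new NorthwindEntities())
{
    List<Customers> customers = ne.Customers.Include("Orders").ToList();
}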

I'll cover eager loading and lazy loading in a follow-up post, because there's a nasty surprise waiting in LINQ to SQL eager loading.

Sunday, January 25, 2009

Entity Framework and LINQ to SQL Performance

Updated: Modified embedded SQL queries to use parameters where appropriate. Results updated.

I've been playing with Entity Framework recently, and noticed that it seemed to be much slower than LINQ to SQL. I ran some tests, and sure enough, I was right. The numbers are interesting:

The code is available here if you want to run these tests yourself.

Methodology

The structure of the test was to set up a static method to return data from the Customers table of Northwind, suitable for binding to an ObjectDataSource in ASP.NET. I ran two sets of tests, one to return six columns from all rows, and one to return the same six columns from a single row. Each set contained the following variations:

  1. DataReader, to provide baseline performance to compare against other technologies.
  2. DataTable, using classic ADO.NET tools (DataAdapter running a command to fill a table).
  3. LINQ to SQL, using a compiled query, and with object tracking turned off, to maximize performance. The results list was projected directly from the query.
  4. LINQ to Entity Framework, using a compiled query to maximize performance. As with LINQ to SQL, the results list was projected directly from the query.
  5. Entity SQL, as an alternative to LINQ, querying the Entity Framework. The code structure for Entity SQL uses a reader, similar to using a DataReader with T-SQL.

For both LINQ to SQL and Entity Framework, I used the visual designer tools to include only the Customers table in the model.

The test measured elapsed time and total processor time. The difference could be assumed to include time used by SQL Server, as well as any other out-of-process time. I ran the tests on a Dell Latitude E6500 with Vista Ultimate, SQL Server 2008, an Intel Core 2 Duo P9500 (2.5 GHz), 4GB RAM, and 7200 RPM disk. The system was idle except for tests; test runs were fairly consistent in timings, as measured by standard deviations over a set of 10 test runs.

The test program ran each query once to ensure that all code was loaded and JITed, and all access plans and data were cached, so that startup time was excluded for each scenario. The program then ran 10,000 queries and collected aggregate time and working set information. For each scenario, the test program was run once, then run 10 times to record timing data.
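The actual harness is in the linked code; as a rough sketch of the approach (a hypothetical helper, not the real test program), each scenario was timed along these lines:

private static void TimeScenario(string name, Action runQuery)
{
    runQuery();                                    // warm-up: JIT, cached access plans, pooled connection
    Process proc = Process.GetCurrentProcess();
    TimeSpan cpuStart = proc.TotalProcessorTime;
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < 10000; i++)
    {
        runQuery();
    }
    sw.Stop();
    proc.Refresh();
    TimeSpan cpuUsed = proc.TotalProcessorTime - cpuStart;
    Console.WriteLine("{0}: {1:F3} ms elapsed per query, {2:F3} ms CPU per query, working set {3:N0} bytes",
        name,
        sw.Elapsed.TotalMilliseconds / 10000,
        cpuUsed.TotalMilliseconds / 10000,
        proc.WorkingSet64);
}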

Results

Keep in mind that the test was designed to measure only the code execution for queries. There is no business logic, and the test design ensured that start-up costs were excluded from test results.

As expected, using a DataReader with raw T-SQL is the best performer, and the technology of choice for extremely large data volumes and for applications where performance is the only thing that matters. The DataReader used .40 milliseconds (elapsed) to retrieve 92 rows and store the data in a list, and only .15 milliseconds for a single row.

The DataTable with classic ADO.NET performed almost as well, using .58 milliseconds (elapsed) for 92 rows and .18 milliseconds for a single row. In the chart above, the DataReader is used as a baseline for comparison, so the relative cost of using a DataTable and DataAdapter was 1.4 for 92 rows, and 1.2 for a single row. That's not a lot of overhead in exchange for using a standardized structure that includes metadata on names and data types. Memory usage was virtually identical to memory usage for the DataReader.

LINQ to SQL also performed very well, using .63 milliseconds (elapsed) for 92 rows and .36 milliseconds for a single row. The performance ratio compared to the DataReader is 1.6 for 92 rows and 2.3 for a single row. Compared to the DataTable, the performance ratio (not charted) was 1.2 for 92 rows and 1.9 for a single row. LINQ to SQL used 40 MB additional memory, based on the final working set size at the end of each run.

That's very decent performance, considering the additional overhead, although Rico Mariani of Microsoft got even better numbers (and I'd love to know how to get closer to those results). In my tests, all queries established new connection objects (or data contexts) for each query, but I can't tell if Rico did the same in his performance tests. This may account for the difference in performance.

With Entity Framework, I found significant additional performance costs. LINQ to EF used 2.73 milliseconds (elapsed) to retrieve 92 rows, and 2.43 milliseconds for a single row. For 92 rows, that's a performance ratio of 6.8 compared to the DataReader, 4.7 compared to the DataTable, and 4.4 compared to LINQ to SQL (the latter two are not charted above). For a single row, the performance ratios are 16.0 compared to the DataReader, 13.2 compared to the DataTable, and 6.8 compared to LINQ to SQL. Memory usage for LINQ to EF was about 130 MB more than for the DataReader.

Entity SQL queries to EF performed about the same as LINQ to EF, with 2.78 milliseconds (elapsed) for 92 rows and 2.32 milliseconds for a single row. Memory usage was similar to LINQ to EF.

Conclusions

Some of the conclusions are obvious. If performance is paramount, go with a DataReader! Entity Framework uses two layers of object mapping (compared to a single layer in LINQ to SQL), and the additional mapping has performance costs. At least in EF version 1, application designers should choose Entity Framework only if the modeling and ORM mapping capabilities can justify that cost.

In between those extremes, the real surprise is that LINQ to SQL can perform so well. (The caveat is that tuning LINQ to SQL is not always straightforward.) The advantage that LINQ (including LINQ to EF) offers is in code quality, resulting from two key improvements over classic ADO.NET:

  1. Names and data types are strongly enforced from top to bottom of your application. Very specifically, that means all the way down to the tables in the database. LINQ uses .NET types, further simplifying the developer's life.
  2. DataTables and DataSets bring the relational data model rather intrusively into the code. To process data in a DataTable, you must adapt your code to the DataTable structure, including DataRows, DataColumns, and (with DataSets and multiple tables) DataRelationships. By contrast, LINQ is a first-class language component in .NET, with object-relational mapping inherent in the LINQ data providers and models. Processing data with LINQ feels like processing object collections, because that's exactly what you're doing.
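As a quick illustrative contrast (the variable names here are hypothetical, not from the test code), the same read looks like this in each style:

// DataTable: untyped access, casts, and column names as strings.
foreach (DataRow row in customerTable.Rows)
{
    string name = (string)row["CompanyName"];
}

// LINQ: strongly typed objects, checked by the compiler.
foreach (Customer c in customers)
{
    string name = c.CompanyName;
}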

So for now, LINQ to SQL is a winner! As Entity Framework version 2 takes shape, it will be time to re-evaluate.

Edited to fix math errors (blush). These affect timings only, but since the chart is based on ratios, those numbers are still correct.

Sunday, January 18, 2009

Triad SQL Server Group

I'll be speaking at the Triad SQL Server Group on Tuesday, February 17, at 6:00 pm. The topic is "The LINQ Revolution":

Microsoft's inclusion of query capabilities as a first-class language component in .NET, along with two object-relational mapping (ORM) solutions in LINQ to SQL and the Entity Framework, will change the ways that you develop database applications. This will be an open discussion, where you choose the topics of greatest interest, including anything from LINQ syntax and code generation to ORM, domain-driven design, n-tiered design issues as they relate to LINQ and ORM, the changing role of stored procedures with ORM, and entity-relationship modeling now that E-R models can be represented directly in code.

Presentation materials and sample code are here.

If you're in the area, I hope to see you there!

ETA: Date changed again. This time we won't let it snow that day!

Wednesday, January 7, 2009

LINQ of the Day

One of the delights of LINQ to SQL is that it's an extension of LINQ to objects. The catch is that it doesn't implement everything that LINQ to objects implements: the constraint on LINQ to SQL is that the expression must be translatable into T-SQL. If you write LINQ expressions that don't translate to T-SQL, your code will compile cleanly, but will fail when LINQ is ready to do the translation. (The exception and call stack can best be described as "educational.") Let's look at an example that works around the problem.

public static List<MultiplierDefinition> GetAllMultipliers()
{
    using (FSP dc = FSP.DC())
    {
        return GetMultipliers(dc)
            .AsEnumerable()
            .OrderBy(m => m, new MultiplierDescComparer<MultiplierDefinition>())
            .ToList();
    }
}



Before tearing into this, let's take a quick look at GetMultipliers, which is a compiled LINQ-to-SQL query. There's nothing strange about it; I write stuff like this routinely. The thing to note is that it returns an IQueryable<MultiplierDefinition>.



private static Func<FSP, IQueryable<MultiplierDefinition>> GetMultipliers =
    CompiledQuery.Compile(
        (FSP dc) => from m in dc.Pm_Multipliers
                    where m.RecordDeleted != true
                    select new MultiplierDefinition
                    {
                        MultiplierID = m.MultiplierID,
                        Description = m.Description,
                        DiscountPercent = m.DiscountPercent,
                        CreatedByUser = m.CreatedByUser,
                        DateCreated = m.DateCreated,
                        ModifiedByUser = m.ModifiedByUser,
                        DateModified = m.DateModified,
                        RecordDeleted = m.RecordDeleted
                    });



So my method GetAllMultipliers() starts by getting a DataContext, and calling a method to get a LINQ-to-SQL expression tree. But the next thing I need to do is sort the data, using a custom sorting algorithm contained in an IComparer<T> object called MultiplierDescComparer<MultiplierDefinition>. There's no way that T-SQL can sort using a C# object! Once the sorting is done, I simply return a generic List<MultiplierDefinition> to my client code.



There are two secrets to making this work. One is that you can add LINQ expressions to a compiled query; they'll be added to the expression tree before the query runs. The second secret, and the important one, is that innocuous little method, .AsEnumerable(). Hey! IQueryable<T> implements IEnumerable<T>, so we're already enumerable. Why do we have to do this explicit... Conversion? Reminder? Expositional countersinking? No-op?



It's a conversion, and here's what it means. Everything before .AsEnumerable() will translate to T-SQL and run in the database. Everything after .AsEnumerable() is a LINQ to objects expression, and will run on the data returned from the database. (That's always a core difference between IQueryable<T> and IEnumerable<T>, and that's why you don't want to store a LINQ-to-SQL query as an IEnumerable<T>, unless you want to force any subsequent LINQ expressions to be run in memory.)



None of this forces immediate execution; it's all still deferred execution, right up to the .ToList() method. Evaluating and understanding the entire code and execution path is a bit convoluted, but definitely enlightening. .AsEnumerable() is always useful when you want to use LINQ that doesn't have a counterpart in T-SQL (for example, .SkipWhile() and .TakeWhile()).
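For instance, here's a hedged sketch of where that boundary falls (an illustrative query in the style of the one above, not code from this project):

var multipliers = dc.Pm_Multipliers
    .Where(m => m.RecordDeleted != true)        // translated to T-SQL; runs in SQL Server
    .OrderBy(m => m.Description)                // still T-SQL
    .AsEnumerable()                             // boundary: rows stream back to the client
    .SkipWhile(m => m.DiscountPercent == 0)     // LINQ to objects; no T-SQL counterpart
    .ToList();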

Thursday, January 1, 2009

Happy New Year

A small reminder that even in a rough year, some things remain beautiful.

Also noting that my job has been extended through almost the end of January. So that's a good start.

Tuesday, December 16, 2008

LINQ of the Day

Today's LINQ covers several points: providing an object data source suitable for binding, making a query reusable and pre-compiling it for best performance, and selecting useful objects.

Here's the code:

public static List<IndependentRep> IndependentRepList()
{
    using (FSP dc = FSP.DC())
    {
        return GetIndependentRepQuery(dc).ToList();
    }
}

private static Func<FSP, IQueryable<IndependentRep>> GetIndependentRepQuery =
    CompiledQuery.Compile(
        (FSP dc) => from u in dc.Pm_Logins
                    where (!u.RecordDeleted) &&
                          (u.LoginType == Util.c_Login_IndependentSalesRep)
                    select new IndependentRep
                    {
                        UserName = u.UserName,
                        RepContact = new Contact
                        {
                            FirstName = u.FirstName,
                            LastName = u.LastName,
                            OfficePhone = u.OfficePhone,
                            MobilePhone = u.MobilePhone,
                            Email = u.Email,
                            CompanyName = u.CompanyName
                        },
                        Status = u.Status
                    });



Providing an Object Data Source



You'll notice that there are two static functions here: a public static function that calls a private static function. The public function has no arguments, so it's ideal for binding with an object data source. It could pull parameters from session data or the application cache, if the scope of the data needed to be constrained based on context, but we're not doing that here. All we do is grab a data context, call the private function, and return the results as a list.



Reusable Pre-Compiled Queries



This might look a bit intimidating at first, but it's well worth the effort to learn and use. The CompiledQuery.Compile method provides a way to compile a LINQ query and cache the expression tree, so that it can be re-used without the overhead of re-evaluating the query each time. The query is compiled on first use, and (as far as I know) has the lifetime of the AppDomain.



The Compile method is overloaded to allow you to pass zero to three arguments to the query. We're not passing any arguments to the query itself in this case, but if we were, their types would come between the first and last types listed in "Func". "Func" is a generic delegate type, so at this point we're specifying types: the first type is always the data context, and the last type is always the return type of the query. By leaving the return type as IQueryable<T>, callers can always add more LINQ methods to further refine the query before it goes to the database. (And remember, we're doing the ultimate in deferred query execution here; the query never gets executed inside this function.)
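If we did need to pass an argument, a hypothetical variation (GetRepsByStatus isn't part of the real code) would slot the extra type in between the data context and the return type, like this:

private static Func<FSP, string, IQueryable<IndependentRep>> GetRepsByStatus =
    CompiledQuery.Compile(
        (FSP dc, string status) => from u in dc.Pm_Logins
                                   where (!u.RecordDeleted) &&
                                         (u.LoginType == Util.c_Login_IndependentSalesRep) &&
                                         (u.Status == status)
                                   select new IndependentRep
                                   {
                                       UserName = u.UserName,
                                       Status = u.Status
                                   });

// Callers supply the argument along with the data context:
// List<IndependentRep> activeReps = GetRepsByStatus(dc, "Active").ToList();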



CompiledQuery.Compile actually takes only one argument, and that's a lambda expression that defines the query itself. The arguments to the lambda expression are all of the types listed in "Func" except the return type. In defining the arguments for the lambda expression, we're providing both types and parameter names, and the parameter names are used inside the query. To the right of the lambda operator ("=>"), we have a normal LINQ query, with the exception that you must return a named class type that matches the return type in "Func". That class can be either an entity known to your data context, or it can be a class defined in your application.



Selecting Useful Objects



Look at the select clause in the query. Select always returns one object, in this case a new instance of type IndependentRep. IndependentRep is an application-defined class with three members:



public class IndependentRep
{
    public string UserName { get; set; }
    public Contact RepContact { get; set; }
    public string Status { get; set; }
}



As we initialize the object in the select clause, we can assign values to UserName and Status directly. We can also create a new instance of a Contact class, and populate that as well with data from the query. In this case, we're generating a simple class for use in databinding, but you could also have the select clause generate a pre-populated business entity, a class with its own business rules and perhaps its own persistence and update methods.
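As a hypothetical sketch (this class isn't part of the project above), the same projection could just as easily fill an entity that carries a simple rule of its own:

public class IndependentRepEntity
{
    public string UserName { get; set; }
    public Contact RepContact { get; set; }
    public string Status { get; set; }

    // A small business rule carried by the entity itself.
    public bool IsActive
    {
        get { return string.Equals(Status, "Active", StringComparison.OrdinalIgnoreCase); }
    }
}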



Putting LINQ into production means being comfortable with selecting from the many features of LINQ, and combining these into coherent and useful units of work. This is a real-life example.

Friday, December 12, 2008

LINQ of the Day

Here's an interesting problem. A table has multiple foreign key references to another table. Each foreign key reference has a different meaning, so they're unique. I need a list of the items referenced, with context information so that I know the usage of the reference. If Table One references Table Two twice in the same row, that's two distinct usages, and should generate two separate records for output.

That's just enough difference to make life interesting, because the output needs information from both tables as well as a constant identifying context and usage. In T-SQL, this is a fairly simple set of four joins, combined with a union so we get all our output in one result set.

Here's the LINQ to do this:

IQueryable<MultiplierCustomer> StdCustomers =
    dc.Pm_VisualCustMultipliers
        .Where(vcm => vcm.RecordDeleted == false)
        .Where(vc => vc.StandardMultiplier == multiplierID)
        .Join(dc.Vm_Customers,
            (vcm => vcm.VisualCustomerID),
            (vc => vc.ID),
            ((vcm, vc) => new MultiplierCustomer
            {
                VisualCustomerID = vcm.VisualCustomerID,
                CompanyName = vc.NAME,
                Brand = vcm.Website,
                MultiplierType = "Standard"
            }));

IQueryable<MultiplierCustomer> QSCustomers =
    dc.Pm_VisualCustMultipliers
        .Where(vcm => vcm.RecordDeleted == false)
        .Where(vc => vc.QSMultiplier == multiplierID)
        .Join(dc.Vm_Customers,
            (vcm => vcm.VisualCustomerID),
            (vc => vc.ID),
            ((vcm, vc) => new MultiplierCustomer
            {
                VisualCustomerID = vcm.VisualCustomerID,
                CompanyName = vc.NAME,
                Brand = vcm.Website,
                MultiplierType = "QuickShip"
            }));

IQueryable<MultiplierCustomer> PartsCustomers =
    dc.Pm_VisualCustMultipliers
        .Where(vcm => vcm.RecordDeleted == false)
        .Where(vc => vc.PartsMultiplier == multiplierID)
        .Join(dc.Vm_Customers,
            (vcm => vcm.VisualCustomerID),
            (vc => vc.ID),
            ((vcm, vc) => new MultiplierCustomer
            {
                VisualCustomerID = vcm.VisualCustomerID,
                CompanyName = vc.NAME,
                Brand = vcm.Website,
                MultiplierType = "Parts"
            }));

IQueryable<MultiplierCustomer> StdBreakCustomers =
    dc.Pm_VisualCustMultipliers
        .Where(vcm => vcm.RecordDeleted == false)
        .Where(vc => vc.StdBreakMultiplier == multiplierID)
        .Join(dc.Vm_Customers,
            (vcm => vcm.VisualCustomerID),
            (vc => vc.ID),
            ((vcm, vc) => new MultiplierCustomer
            {
                VisualCustomerID = vcm.VisualCustomerID,
                CompanyName = vc.NAME,
                Brand = vcm.Website,
                MultiplierType = "Standard Break"
            }));

List<MultiplierCustomer> usageList =
    StdCustomers
        .Concat(QSCustomers)
        .Concat(PartsCustomers)
        .Concat(StdBreakCustomers)
        .ToList();



What's neat is that it really does build the SQL that you'd want, doing a UNION ALL on the individual LINQ queries, so that you send one query to the database. Here's the generated SQL:



SELECT [t10].[VisualCustomerID], [t10].[NAME] AS [CompanyName], [t10].[Website] AS [Brand], [t10].[value] AS [MultiplierType]
FROM (
    SELECT [t7].[VisualCustomerID], [t7].[NAME], [t7].[Website], [t7].[value]
    FROM (
        SELECT [t4].[VisualCustomerID], [t4].[NAME], [t4].[Website], [t4].[value]
        FROM (
            SELECT [t0].[VisualCustomerID], [t1].[NAME], [t0].[Website], @p1 AS [value]
            FROM [dbo].[pm_VisualCustMultipliers] AS [t0]
            INNER JOIN [dbo].[vm_Customers] AS [t1] ON [t0].[VisualCustomerID] = [t1].[ID]
            WHERE ([t0].[StandardMultiplier] = @p0) AND (NOT ([t0].[RecordDeleted] = 1))
            UNION ALL
            SELECT [t2].[VisualCustomerID], [t3].[NAME], [t2].[Website], @p3 AS [value]
            FROM [dbo].[pm_VisualCustMultipliers] AS [t2]
            INNER JOIN [dbo].[vm_Customers] AS [t3] ON [t2].[VisualCustomerID] = [t3].[ID]
            WHERE ([t2].[QSMultiplier] = @p2) AND (NOT ([t2].[RecordDeleted] = 1))
        ) AS [t4]
        UNION ALL
        SELECT [t5].[VisualCustomerID], [t6].[NAME], [t5].[Website], @p5 AS [value]
        FROM [dbo].[pm_VisualCustMultipliers] AS [t5]
        INNER JOIN [dbo].[vm_Customers] AS [t6] ON [t5].[VisualCustomerID] = [t6].[ID]
        WHERE ([t5].[PartsMultiplier] = @p4) AND (NOT ([t5].[RecordDeleted] = 1))
    ) AS [t7]
    UNION ALL
    SELECT [t8].[VisualCustomerID], [t9].[NAME], [t8].[Website], @p7 AS [value]
    FROM [dbo].[pm_VisualCustMultipliers] AS [t8]
    INNER JOIN [dbo].[vm_Customers] AS [t9] ON [t8].[VisualCustomerID] = [t9].[ID]
    WHERE ([t8].[StdBreakMultiplier] = @p6) AND (NOT ([t8].[RecordDeleted] = 1))
) AS [t10]
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [2]
-- @p1: Input NVarChar (Size = 8; Prec = 0; Scale = 0) [Standard]
-- @p2: Input Int (Size = 0; Prec = 0; Scale = 0) [2]
-- @p3: Input NVarChar (Size = 9; Prec = 0; Scale = 0) [QuickShip]
-- @p4: Input Int (Size = 0; Prec = 0; Scale = 0) [2]
-- @p5: Input NVarChar (Size = 5; Prec = 0; Scale = 0) [Parts]
-- @p6: Input Int (Size = 0; Prec = 0; Scale = 0) [2]
-- @p7: Input NVarChar (Size = 14; Prec = 0; Scale = 0) [Standard Break]
-- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.1



Proving, once again, that there's no substitute for checking the generated SQL when you're doing something twitchy with LINQ.
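Two ways to get at that generated SQL from a LINQ to SQL DataContext (illustrative usage; dc and the queries are the ones above):

dc.Log = Console.Out;   // echoes every generated command as it executes

string generatedSql = dc.GetCommand(
        StdCustomers.Concat(QSCustomers).Concat(PartsCustomers).Concat(StdBreakCustomers))
    .CommandText;       // inspect the T-SQL without executing the query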

Monday, December 8, 2008

It Ain't Rocket Surgery

Or is it? And why should the rocket need surgery, anyway? And don't we all just love debugging stuff that fails intermittently, but always works correctly on our own machines? So it happened with WCF-based services for AJAX recently.

This wasn't my first entanglement with ASP.NET's temporary files. The clue comes in various forms: You get a build error for a file that's right in front of you, and which compiles cleanly. You get a run-time error that says ASP.NET couldn't find a file that you know is there. Except that it has a funny name, like "App_Web_zxemnnhw.5.cs". That's an ASP.NET temporary file, and you'll find them in places like C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\breidert web\6d823db7\f71b84dd.

The problem is that ASP.NET decides that it can leave some source code to be compiled on demand, even at run time. It doesn't seem to affect code-behinds, or code (anywhere in the web project or web application) that is called directly from code-behinds. The issue sneaks in when code in the web application project is only referenced in web.config - things like providers, HTTP modules, or services. This time, the nasty pointy snarly teeth belonged to a WCF service. (Check it out; you can embed WCF services directly in your web application. Right-click on the project or on a folder inside the project, click "Add -> New Item...", and add a web service or a WCF service. This is in VS 2008, where all web apps are AJAX-enabled.)

What you get is a pair of files, one called "MyServiceThing.svc" and a code-behind, "MyServiceThing.svc.cs". You also get some new references and a <system.serviceModel> section in web.config that contains behaviors and bindings for your WCF service. (To use your new service, you'll need to code a service reference in your ScriptManager tag for ASP.NET AJAX.)

And there the problem begins, because there is no direct call or reference to your service code in your C# (or VB) code. ASP.NET figures that it can stash this code in its temporary files, and compile on demand. But wait! There's more! This is a development or staging machine, and you're going to publish to a web server, and maybe copy from there to a production machine. That's where the ASP.NET temporary files get lost, because they don't seem to tag along with the publishing and deployment process. (Note that sometimes the problem doesn't even take this much effort to throw errors in your face. Gotta love it when a project builds cleanly but publishes with errors.)

When you encounter problems with ASP.NET temporary files, there's a simple solution: Move the code to a separate project, and reference the project in your web app.

For WCF services, it gets a little trickier because of the interplay between hosting and ASP.NET AJAX. You need that .svc file, and it needs to stay in your web project. That is specifically an ASP.NET web "page", and it includes an ASP.NET declaration:

<%@ ServiceHost Language="C#" Debug="true" 
Service="NorthwindLINQ.MyServiceThing"
CodeBehind="MyServiceThing.svc.cs" %>


So to move the WCF service out of the web app, you only want to move the code-behind to a new project. Leave the .svc file where it is, delete the CodeBehind reference and file, and make sure the Service reference is the fully qualified class name of the service.


<%@ ServiceHost Language="C#" Debug="true" 
Service="NorthwindLINQ.Services.MyServiceThing" %>


Note that this declaration type is "ServiceHost". This is how WCF services get hosted in an ASP.NET application. Hosting is a critical aspect of WCF, and ASP.NET pretty much takes care of that for you. The constraint is that you're not building a general-purpose service that anyone can call; it's going to be restricted to your web app. On the other hand, it goes through the full ASP.NET pipeline, so it has access to authentication and authorization status, session data, etc.
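For example, a hypothetical method body inside such a service (assuming the aspNetCompatibilityEnabled setting and the AspNetCompatibilityRequirements attribute shown later in this post) can read ASP.NET context directly:

public string WhoAmI()
{
    // Available because the request runs through the full ASP.NET pipeline.
    string user = HttpContext.Current.User.Identity.Name;
    bool loggedIn = HttpContext.Current.User.Identity.IsAuthenticated;
    return string.Format("{0} (authenticated: {1})", user, loggedIn);
}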

Also, make sure your ScriptManager tag points to the .svc file:


<asp:ScriptManager ID="ScriptManager"
    EnablePageMethods="true"
    runat="server" EnablePartialRendering="true">
    <Scripts>
        <asp:ScriptReference Path="~/jscripts/myStuff.js" />
    </Scripts>
    <Services>
        <asp:ServiceReference Path="~/someFolder/MyServiceThing.svc" />
    </Services>
</asp:ScriptManager>


Check your web.config; you may not have to change the serviceModel there, but it's good to verify rather than trust. Things to watch for include the fully qualified class (service) name, and the reference to the service contract. Yes, it's OK for the address to be an empty string; ASP.NET takes care of that for you. ASP.NET AJAX supports only webHttpBinding, and you don't need a metadata binding.


<system.serviceModel>
    <behaviors>
        <endpointBehaviors>
            <behavior name="NorthwindLINQ.Services.NorthwindLINQAspNetAjaxBehavior">
                <enableWebScript />
            </behavior>
        </endpointBehaviors>
    </behaviors>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
    <services>
        <service name="NorthwindLINQ.Services.MyServiceThing">
            <endpoint address=""
                behaviorConfiguration="NorthwindLINQ.Services.NorthwindLINQAspNetAjaxBehavior"
                binding="webHttpBinding"
                contract="NorthwindLINQ.Services.IMyServiceThing" />
        </service>
    </services>
</system.serviceModel>


Next comes the question of adding services in a separate project. WCF has its own way of doing this, and it's not really what we want. The project can be a normal class library. When you add a WCF service to a class library, Visual Studio creates an interface file and a class file (but no .svc file, since that's specific to ASP.NET). It also creates an app.config and puts the WCF binding in it. Get rid of the app.config file, since the bindings you want are already in web.config, and you don't want any other bindings making the service available to any other callers and/or hackers.

When you add a WCF service to ASP.NET, you don't get an interface file for the service contract. When you add a WCF service to a class library, you do get the interface file. I like interface files for service contracts. (You'll have to modify the contract name in web.config to point to the interface.) However you do it, though, you will probably want to modify the contract attributes. Here's a sample interface with service contract:


namespace NorthwindLINQ.Services
{
    [ServiceContract(Namespace = "NorthwindLINQ.Web", Name = "ThingService")]
    public interface IMyServiceThing
    {
        [OperationContract]
        string DoWork(string thing);
    }
}


And here's the associated class file:


namespace NorthwindLINQ.Services
{
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class MyServiceThing : IMyServiceThing
    {
        public string DoWork(string thing)
        {
            try
            {
                if (string.IsNullOrEmpty(thing))
                {
                    throw new Exception("Service tantrum");
                }
                return thing + " : Done!";
            }
            catch (Exception ex)
            {
                throw new Exception(string.Format("{0}: {1}", ex.GetType(), ex.Message));
            }
        }
    }
}


The namespace and name parameters on the service contract are important; these are the namespace and class names that ASP.NET AJAX will use to construct a proxy for calling your service. The AspNetCompatibilityRequirements attribute is also important so that WCF and ASP.NET AJAX will work smoothly together.

Calling your service from JavaScript is easy, finally:


function PageLoad()
{
NorthwindLINQ.Web.ThingService.DoWork("whatever", onCompleted, onError);
}
function onCompleted(results, context, methodName) {
alert(methodName + " : " + results);
}
function onError(errorInfo, context, methodName) {
alert(methodName + " : " + errorInfo);
}


onCompleted and onError will have either your results, or the error info, respectively. Note that you call the service using the namespace and class specified in the service contract.

And that's the rocket surgery to fix the ASP.NET temporary files problem for WCF services.

Update: You know what? It still doesn't fix everything! The .svc file that remains in the web project is still subject to ASP.NET temporary file madness.

So if you have temporary file problems, there's always "Clean Solution" followed by "Rebuild Solution." If that fails, then next time I'm calling a real rocket surgeon.

Friday, November 28, 2008

Barkburn

Many years ago, I attended an excellent course on project leadership. They had a term called "barkburn." You know the people who can't see the forest for the trees? People with barkburn can't see the tree because their face is buried in the bark.

The other night, after a .NET SIG meeting, we got into a discussion of testing, and particularly the relationship between unit testing (including Test-Driven Development) and the kind of large-scale architectural and design issues that tend to interest me. And of course, quality issues always bring up the space shuttle, where many software engineering and quality practices originated. In email after the discussion, someone referenced Unit Testing, TDD and the Shuttle Disaster.

A couple of thoughts on the shuttle….

Unit testing is certainly not a new idea. TDD is just another way of doing it. And it is certainly true that hardware gets unit and component testing as well as integration and system testing. Nothing at all new about this. And test specs getting written before development isn’t new, either in software or hardware. Some hardware units in any design and early production are designated specifically for testing, often testing to destruction. There’s rarely anything new under the sun.

NASA’s history prior to the shuttle included the Saturn V rocket, which was developed on a very accelerated schedule. Part of that acceleration came from grouping component and integration tests with system test: the first test of some components, and of their integration, was flying the first rocket. They called it “all up” testing. And it worked reasonably well; the first manned Saturn V flight was only the third flight of the rocket, and there were no failures in the entire life of the Saturn V. The only serious problem to occur in flight was a “pogo” vibration in the second stage, seen on Apollo 6 (unmanned) and Apollo 13. Apollo 12’s Saturn V survived a lightning strike during liftoff.

Compare that to the shuttle. Both Challenger and Columbia failures shared a common root cause: stacking the shuttle side-by-side with its boosters and fuel tank, instead of the usual vertical stack. The Challenger break-up occurred because of asymmetric aerodynamic forces, due to trying to fly sideways as the boosters and fuel tank came apart. While an o-ring failure on a vertical stack would also have probably lost the launch stage, an Apollo-type design would have left the crew vehicle on-axis with the booster thrust changes, and the Apollo (as well as Mercury and Gemini) had an escape rocket to pull the crew vehicle away from a failing booster. The o-ring failure might well have been survivable in a vertical stack.

As for Columbia…. Ice shedding during launch is utterly routine. Ice builds up on the storage tanks for any rocket fueled by liquid oxygen. The foam was there only to protect the shuttle in a side-by-side stack. In a vertical stack, this wouldn’t have been a failure mode at all; it would have been just another routine launch.

These are design failures, and you can’t unit-test your way out of design flaws. As the Quality Assurance professionals have known for a long time, you can’t test quality into a product. That’s something the software industry tends to forget, on a fairly regular schedule.

Ahhhh, time to get to work.

LINQ of the Day

public static bool CustomerHasMultiplier(string ID)
{
    return FSP.DC().Pm_VisualCustMultipliers
        .Where(m => m.Status == "Active")
        .Where(m => m.RecordDeleted == false)
        .Any(m => m.VisualCustomerID == ID);
}

All I want to do is knock on the database's door and ask, "Is anybody home?" With SQL, that means setting up a connection, a command, and a reader (or a data adapter and a DataSet), then running a query to do a count(*) or at least check whether any rows were returned. With LINQ, it's a one-statement function that returns a bool.
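For contrast, here's a rough sketch of the classic ADO.NET version of the same check (the connection string and SQL text are illustrative, not from this project):

public static bool CustomerHasMultiplierAdoNet(string ID)
{
    // connectionString is assumed to be defined elsewhere.
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT COUNT(*) FROM pm_VisualCustMultipliers " +
        "WHERE Status = 'Active' AND RecordDeleted = 0 AND VisualCustomerID = @ID", conn))
    {
        cmd.Parameters.AddWithValue("@ID", ID);
        conn.Open();
        return (int)cmd.ExecuteScalar() > 0;
    }
}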