Mathias Brandewinder on .NET, F#, VSTO and Excel development, and quantitative analysis / machine learning.
by Mathias 5. April 2010 11:59

A few weeks back, Michael asked an interesting question in a comment: how do you go about unit testing a VSTO project? One of the reasons I prefer working with VSTO over VBA is that it makes it possible to write automated tests. What I realized with this question, though, is that I heavily unit test the .Net functionality of my add-ins, but rarely (if ever) the interaction with Excel.

Note: I am aware of the existence of a VBA unit testing solution, xlUnit; I found the project conceptually pretty cool, but from a practical standpoint, it doesn’t seem nearly as convenient as NUnit or the other established frameworks, which isn’t much of a surprise, given the maturity of unit testing in the .Net world.

The reason for this is twofold. First, most of my VSTO projects focus on performing heavy computations outside of Excel and writing the results back to Excel; as a result, the meat of the logic has little to do with Excel, and there isn't all that much to test there.

Second, testing against VSTO is a bit of a pain. By definition, a VSTO project comes with a giant external dependency on Excel, which we have limited control over, and which is also rather unpleasant to deal with from .Net. To illustrate one aspect of the issue, let's consider this code snippet:

using NUnit.Framework;

[TestFixture]
public class TestsThisAddIn
{
   [Test]
   public void WeCannotInstantiateTheAddInProperly()
   {
      // Instantiating the add-in directly does not hook it up to Excel:
      // the Application property stays null, and the assertion fails.
      var addIn = new ThisAddIn();
      var excel = addIn.Application;
      Assert.IsNotNull(excel);
   }
}

This test will fail: if we instantiate the add-in directly, it does not automatically hook up to Excel. The VSTO add-in is started by Excel itself; we cannot easily replicate that in our test code, and therefore cannot access the Excel object to verify that things behave as expected.

So how could we approach the problem? Unit testing our code means that we want to validate that pieces under our control (classes we wrote) work properly; the challenge is that some of them interact with Excel. We are not concerned with testing the system in its entirety (add-in code + Excel) here, which is an important issue, but not a unit-testing one.

The words “unit test” and “external dependency” together suggest one technique – Mocking. In a nutshell, Mocking consists of replacing the external dependency with a fake, an object which behaves the same way as the “real thing”, but is easier to work with.

There are three ways our classes can interact with Excel that I can think of:

  • react to Excel events
  • read/query from Excel
  • write/command to Excel

Let’s consider the first case, through a contrived/simplified example. Suppose that our add-in keeps track of the names of workbooks the user recently interacted with, through the following class:

using System.Collections.Generic;
using Excel = Microsoft.Office.Interop.Excel;

public class RecentWorkbooks
{
   private List<string> recentWorkbooks;

   public RecentWorkbooks(Excel.AppEvents_Event excelEvent)
   {
      this.recentWorkbooks = new List<string>();
      excelEvent.NewWorkbook += NewWorkbook;
      excelEvent.WorkbookOpen += NewWorkbook;
   }

   public List<string> RecentWorkbookNames
   {
      get
      {
         return new List<string>(this.recentWorkbooks);
      }
   }

   private void NewWorkbook(Excel.Workbook workbook)
   {
      this.recentWorkbooks.Add(workbook.Name);
   }
}

The class maintains a list of strings, the workbook names. In the constructor, it subscribes to the NewWorkbook and WorkbookOpen Excel events, both of which are routed to the NewWorkbook method, where the name of the newly created or newly opened workbook is appended to the list.

The class is plugged into the add-in through the ThisAddIn_Startup method:

public partial class ThisAddIn
{
  public RecentWorkbooks RecentWorkbooks
  {
     get;
     set;
  }

  private void ThisAddIn_Startup(object sender, System.EventArgs e)
  {
     var recentWorkbooks = new RecentWorkbooks(this.Application);
     this.RecentWorkbooks = recentWorkbooks;
  }
}

How can we unit test this? There isn’t much to test on the ThisAddIn itself; the part which we would like to test is that when Excel fires NewWorkbook or WorkbookOpen, the name of the new workbook gets appended to the RecentWorkbooks list.

Opening an instance of Excel programmatically and passing it to the RecentWorkbooks class would be a total nightmare. Fortunately, the Excel object model, as seen through the Office interop assemblies VSTO uses, consists almost entirely of interfaces: as a result, we can substitute anything that implements that interface for the “real” Excel.AppEvents_Event in the RecentWorkbooks constructor, and the class should accept the impostor just fine.

[Image: imposter cow. Source: Picture Is Unrelated]

Rather than roll my own mock implementations by hand, I leveraged Rhino.Mocks to generate the fakes. The test I ended up writing is below:

[Test]
public void WhenWorkbookIsOpenedItsNameShouldBeAppendedToNameList()
{
   // Fake Excel, hooked up to the class under test.
   var excel = MockRepository.GenerateStub<Excel.Application>();
   var recentWorkbooks = new RecentWorkbooks(excel);

   // Fake workbook, whose Name returns "Shazam!".
   var workbook = MockRepository.GenerateStub<Excel.Workbook>();
   var name = "Shazam!";
   workbook.Stub(w => w.Name).Return(name);
   var args = new object[1];
   args[0] = workbook;

   // Have the fake Excel raise WorkbookOpen, passing the fake workbook as the event argument.
   excel.Raise(e => e.WorkbookOpen += null, args);

   // Last() requires a using System.Linq directive.
   var lastWorkbookName = recentWorkbooks.RecentWorkbookNames.Last();
   Assert.AreEqual(name, lastWorkbookName);
}

In plain English, this translates into: “give me something that looks like Excel, and hook it up to the RecentWorkbooks class; then have the fake Excel fire WorkbookOpen, passing a fake Workbook named “Shazam!” as the event argument; the last name in the list should then be “Shazam!”.”

This approach allows us to test virtually any interaction with the Excel model, for all 3 kinds of scenarios mentioned above. In particular, this is a very convenient way to simulate error/exception scenarios, which can be fairly complicated to generate using the “real thing”.
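
For instance, here is what the write/command case could look like with the same Rhino.Mocks approach. The AutoSaver class and its SaveNow method below are hypothetical, standing in for any class of ours that issues commands to Excel; the point is simply that we can hand it a fake Workbook and verify that the call we expect was made, without ever starting Excel:

[Test]
public void WhenSaveNowIsCalledTheWorkbookShouldBeSaved()
{
   // Fake workbook, generated as a mock so that calls made against it can be verified.
   var workbook = MockRepository.GenerateMock<Excel.Workbook>();

   // AutoSaver and SaveNow are hypothetical: any class of ours that commands Excel would do.
   var autoSaver = new AutoSaver(workbook);
   autoSaver.SaveNow();

   // Verify that our class asked Excel to save the workbook.
   workbook.AssertWasCalled(w => w.Save());
}

Reads/queries work along the same lines: stub the property or method our class is expected to call, have it return a canned value, and assert on what the class does with it.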

The only requirement is that our classes make their dependency on Excel explicit, injected through the constructor or through properties, so that it can be mocked (a quick sketch follows the list below) - which in practice means

  • avoiding using the Globals class and the access it provides to the AddIn through the static property ThisAddIn,
  • using the ThisAddIn class as a bootstrapper, to initialize classes that contain the add-in logic and wire them to the Excel instance and the events.
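
To make the contrast concrete, here is a quick sketch; both recorder classes are made up for illustration. The first one grabs Excel through the static Globals class, and there is no seam where a test could slip in a fake; the second receives Excel through its constructor, exactly like RecentWorkbooks does, and will happily accept a Rhino.Mocks stub:

// Hard to test: the dependency on Excel is hidden inside the method.
public class HardToTestRecorder
{
   public void Record()
   {
      var excel = Globals.ThisAddIn.Application; // static access, cannot be faked
      // ... work against excel ...
   }
}

// Testable: the dependency is explicit, and anything implementing
// the interface, including a mock, can be passed in.
public class TestableRecorder
{
   private readonly Excel.Application excel;

   public TestableRecorder(Excel.Application excel)
   {
      this.excel = excel;
   }

   public void Record()
   {
      // ... work against this.excel ...
   }
}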

Is this a good use of development time? As much as I like unit tests and TDD, I am on the fence. I can see the value in checking that our code handles events properly. On the other hand, the reads or writes to Excel are usually very simple calls, but in C# they have very intricate signatures, and mocking these calls seems tedious with limited added value, except maybe to simulate failure scenarios - for instance, if I can’t find the expected worksheet, does my class handle the situation gracefully, and perhaps log an error?
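
To illustrate that failure case, here is a sketch of such a test. The ReportReader class and its FindWorksheet method are hypothetical, standing in for any class of ours that looks up a worksheet by name and is expected to degrade gracefully (return null, log an error) when the sheet is missing; producing that failure against the real Excel would require a carefully prepared workbook, whereas with a mock it takes one line:

[Test]
public void WhenWorksheetIsMissingReaderShouldReturnNullInsteadOfThrowing()
{
   // Fake workbook whose Worksheets collection throws a COMException
   // (System.Runtime.InteropServices), the way the real Sheets indexer
   // does when asked for a sheet that does not exist.
   var workbook = MockRepository.GenerateStub<Excel.Workbook>();
   var sheets = MockRepository.GenerateStub<Excel.Sheets>();
   sheets.Stub(s => s["Results"]).Throw(new COMException("Sheet not found"));
   workbook.Stub(w => w.Worksheets).Return(sheets);

   // ReportReader and FindWorksheet are hypothetical placeholders.
   var reader = new ReportReader(workbook);
   var worksheet = reader.FindWorksheet("Results");

   // The class should swallow the exception and signal failure quietly.
   Assert.IsNull(worksheet);
}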

I guess I could be convinced, if I knew how to address the following issue: VSTO projects behave differently from typical class libraries. With a class library, you create your library project and another project with your tests, add a reference from the test project to the library under test, and you are done: whenever you rebuild, the test library picks up the most recent version of the dll, and the code/build/test workflow runs smoothly. For some reason, I could not get this to work with VSTO: the add-in project does not show up in the list of available projects to reference, so the best approach I found was to point the test project directly at the add-in dll in the bin folder. As a result, I have run into glitches where the test project did not automatically refresh after a rebuild, and other similar problems, which is annoying and stressful. When I run my tests, I like to know that they fail or succeed because of my code; with this setup, I also have to worry about whether the results I observe are due to my current code, or to something not being properly refreshed.

Finally, unit tests are only one aspect of testing; essentially, the test I presented operates on the following premise: “if Excel behaves the way I believe it does, then my class is working properly”. In no way does it guarantee that, when everything is wired together, Excel and the add-in will work happily with each other. I don't know how to approach this, for the reason I mentioned earlier: the add-in is started by Excel, and I am not sure how to replicate that in an automated test.

If you work on VSTO projects, and have had some success with automated testing, I would love to hear how you approached it!

Comments

4/5/2010 1:59:21 PM #

Michael

Mathias,

I read the article and maybe I should look into mocking.
Currently I have generated a project that creates 1600 unit tests to write to specific cells that do the same thing but with different permutations. Yes, I know that I do not need to do exhaustive testing, but it's automated and runs during lunch :-)
I also use a remoting class that is internal to my Excel add-in (debug build only) to access the toolbar methods (simulating button presses). I then check the values, formats, etc. that I expect for correctness.
This may not strictly be TDD/unit testing, but to the end user, functionality is really what counts.

Michael, United States

4/6/2010 5:59:48 AM #

mathias

Michael,
Thanks for the interesting comment! 1600 unit tests sounds pretty epic - and it's a good reason to take a comfortable lunch break ;)
If you are into test-driven development, I most definitely recommend looking into mocking (either Rhino.Mocks or Moq is a good starting point). As your code grows, you are bound to have more dependencies, and these are hard to test; mocks allow you to keep your tests as decoupled as your code, and help you write tests that are maintainable and fast to run.
I definitely agree re: the fact that an app which does what the user wants is the only real benchmark!
I think the issues you describe and address - testing user interface actions, and persistence / writing - are often a pain to test automatically, expensive to write and brittle/hard to maintain, regardless of the type of app. I'd say it's a case by case decision as to whether it's reasonable to automate or not. How likely is the code to change, and how difficult would it be to update the tests accordingly?
Regarding the reads and writes to Excel, my approach has been to write a few utility functions (ex: given a worksheet and name as argument, read the named range into an array) and test them automatically against an actual Excel instance, using good old interop - and then call these from add-ins. In that case, my testing focus is to validate that the add-in is calling my utilities with the right arguments (because I can assume my utilities work), which I can test through mocking.
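Roughly, such a utility could be sketched like this (the names are illustrative, not the exact code I use):

// Sketch: read a named range from a worksheet into a 2-dimensional object array.
// A single-cell range comes back as a scalar, which a real implementation would handle.
public static object[,] ReadNamedRange(Excel.Worksheet worksheet, string rangeName)
{
   var range = worksheet.get_Range(rangeName, Type.Missing);
   return (object[,])range.Value2;
}
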
However, at some point you will hit a limit anyway, and I just think that manual testing has its place too. It's fine to check that the content of a cell has been written the way you expect, but suppose that your code creates a chart. What are you going to do? Writing an automated test to verify that the right chart has been created, maybe with the correct legend or scale, would be an absolute nightmare. At what point is it still worth automating?
I have to look into the remoting option you mention; the more tools you have available, the easier it is to pick the right tool for the task!
Cheers, and happy testing.

mathias, United States

4/6/2010 3:58:11 AM #

Dennis Wallentin

Hm, this is a subject which I still don't know exactly how to deal with for all my .NET based Excel development. I have read some articles and chapters and have been, from time to time, testing it on a very small scale. I have been recommended to use Simple.NET (http://simpledotnet.codeplex.com/). I was hoping that your blog entry would somehow guide me, but you seem to face the same situation as I do. Because of the nature of Excel development, the approach is perhaps not justified in terms of cost benefit. I only know one guy who has been doing some heavy unit testing where Excel is involved, but he didn't use mocking at all.

Kind regards,
Dennis

PS: To me it looks like there are four fake cows in the picture and only one strongly typed ;)

Dennis Wallentin, United States

4/6/2010 7:38:56 AM #

mathias

Hi Dennis,
Good catch on the cows - the one on the right, with its nice, square features, is obviously strongly typed; the fuzzy furry ones are clearly fakes :-)
I had a quick look at simple.net, and it seems to do the same thing as classic mocking frameworks like Moq and Rhino.Mocks. Given that the project is in beta, I would probably try using one of these first, because they have been battle-tested, and reliability is important for your testing tools!
I would be really interested in hearing what your colleague has been using. I think Mocking is a great solution to handle a whole class of unit testing problems with VSTO; I am very curious about what technique he is using, and what aspects he can test with it.
What I am struggling with (besides the dll/build issues I mention at the end of the post, working on it!) is a double question: what would be a good technical solution to test the overall behavior of an add-in - and assuming that technical solution was available, would it be a good time investment? Even if I could instantiate the whole add-in + Excel setup and run it through test methods, it would still be extremely painful to validate scenarios, because you would have to verify the state of Excel through the Excel object model, and that makes for complicated and fragile test code.
If you consider, for instance, the VSTO tutorial I am putting together, up to this point I think that mocks would do the job well, because the only thing I really care about is whether Excel events trigger the right state changes in the add-in. If time permits, I'll try to write unit tests for that, and see where this leads me.
That being said, Mocking clearly addresses only some aspects of VSTO testing problems - I would love to know what other people do, if only to know what other options are available, what their benefits and drawbacks are, so that I can pick the most sensible approach for each case. Add-ins can be very different, and I don't think there is a unique silver bullet which handles every case economically and efficiently!

mathias, United States

4/6/2010 8:38:39 PM #

Dennis Wallentin

Mathias,

Aha - thanks for the clarification :-) Your intention sounds interesting enough that I will stay tuned.

I have asked him to pop into this thread. From what I have learned, he has only been doing it through an automation approach to Excel.

Kind regards,
Dennis

Dennis Wallentin, Sweden

2/1/2012 9:32:37 PM #

Chhavi

Hi,

My requirement is to automate the testing of a Microsoft Word add-in. This add-in has a lot of buttons and a lot of screens. I googled and found that Coded UI Test is one of the solutions, but I am not convinced by Coded UI Test; other options that I found are the UI Automation Library and other articles at
blogs.msdn.com/.../...es-for-vsto-application.aspx
But I still haven't found the right approach.
Can you please guide me in the right direction: how can I do automated testing of Microsoft Word add-ins?

Chhavi, United Kingdom

2/6/2012 8:54:57 PM #

Mathias

Hi Chhavi,
Without more information it's difficult to provide guidance. What does the add-in do, and what type of scenario are you trying to test/automate?
Mathias

Mathias, United States
