Maintaining Your Sanity in a Large Slack Instance

If you have ever worked in a large Slack instance, you know some of the pains that come along with it. The random @here or @channel and the subsequent responses admonishing the sender for alerting 6,000 of their closest friends. The pinging from channels you only slightly care about. The people who forget that the chat instance is still a professional setting.

In our Slack instance, @here and @channel are enabled. Used judiciously, they are useful in small channels. Unfortunately, they cause a ruckus in larger channels, though people tend to learn pretty quickly not to abuse them. In my opinion this is an area where Slack could improve, by allowing only channel owners to use @here and @channel. Slack could also provide another keyword, such as an @owners keyword, that lets people target the owners of a channel.

In the two years or so that I have been working in a large Slack instance, I have learned a few things and thought I would share. Here are a few of my best practices for working with Slack.

Organize Your Channels

Channels will fall into one of a few categories:

Lifeblood Channels

These channels are the channels you live in day in and day out. You want to know immediately when someone posts in these channels. A good post here will make your day, a bad post here could ruin a weekend.

Informational Channels

These channels provide useful information to you and when you get some time, you will catch up on the channel and you may chime in from time to time. You joined this channel because you have a vested interest in the topic at hand, but it may not affect your day to day work.

Corporate Channels

These channels are used by various levels of management to share information. They tend to have surges of information and then a lot of downtime. Usually this is required reading, but can be done at your leisure.

The Required Channel

You can’t leave this channel. Hopefully it is just a Corporate Channel you cannot leave, with good admins who lock down who can post in it.

The Ignored Channels

These channels have nothing to do with your job and, depending on the size of your Slack instance, you may not even know they exist.


Once you’ve decided what category applies to a given channel you can then customize how that channel is displayed to you. Your lifeblood channels should be favorited by clicking the star in the top left corner of the channel. You can also star any direct message conversations you have with colleagues that you work with on a daily basis.

With your lifeblood channels and colleagues starred, you can now configure Slack to always display those channels in the sidebar and hide all others. Under preferences, go to the sidebar options and select the My Unreads, along with everything I've starred option:

Sidebar Preferences Screenshot

All other channels will only display when there is new content in them. For those required, corporate, or informational channels that have a lot of posts but only mildly useful content, you can mute the channel to prevent extra alerts from appearing in your Slack client.

Mute Channel

Muting a channel will prevent it from appearing unread, but you may still get badges next to the channel for @here or @channel messages. For channels where @here/@channel get annoying, you can also mute those alerts in the channel notification preferences:

Mute Here

You can even star a channel you have muted, so it will appear in your list at all times, but you will never receive a notification from the channel. This is useful for those chatty channels that you just want to check in with from time to time.

Manage Your Alerts

If you don’t set up your alerts properly in Slack, your UI will be a mishmash of channels with various numbers next to them and a whole bunch of bolded channels that you have joined over the course of doing business.

My recommendation is to only configure notifications for Direct Messages, mentions & keywords. Again, this is under the notifications section of your preferences:

Notifications Screenshot

Keywords

Keywords are an awesome Slack feature that notify you whenever specific words appear in conversation. You can configure keywords to keep you abreast of others talking about your feature or product across your company. A keyword mentioned in any channel to which you belong will result in a notification. This is managed under the notifications section of your Slack preferences.

Slack Keywords


Slack is a very useful tool, but it can get unwieldy fast. Using the settings I outlined above, your large Slack instance will be more productive and less of a distraction.

Do you have a tip for making Slack more productive or less of a distraction? If so, share it below.

Special Folder Enum Values on Windows and Mac in .Net Core

On Windows it is common to use Environment.SpecialFolder to access certain folders instead of hard coding the paths or writing the appropriate lookup code for them. Now that code is being ported to the Mac using .Net Core, I thought I would document the values that the special folders resolve to when running .Net Core code on a Mac. Below is a table containing the data for a user whose username is john, on both a Windows machine and a Mac OSX machine.

Enum Value | Windows Value | Mac Value
AdminTools | C:\Users\john\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Administrative Tools |
ApplicationData | C:\Users\john\AppData\Roaming | /Users/john/.config
CDBurning | C:\Users\john\AppData\Local\Microsoft\Windows\Burn\Burn |
CommonAdminTools | C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools |
CommonApplicationData | C:\ProgramData | /usr/share
CommonDesktopDirectory | C:\Users\Public\Desktop |
CommonDocuments | C:\Users\Public\Documents |
CommonMusic | C:\Users\Public\Music |
CommonOemLinks | |
CommonPictures | C:\Users\Public\Pictures |
CommonProgramFiles | C:\Program Files\Common Files |
CommonProgramFilesX86 | C:\Program Files (x86)\Common Files |
CommonPrograms | C:\ProgramData\Microsoft\Windows\Start Menu\Programs |
CommonStartMenu | C:\ProgramData\Microsoft\Windows\Start Menu |
CommonStartup | C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup |
CommonTemplates | C:\ProgramData\Microsoft\Windows\Templates |
CommonVideos | C:\Users\Public\Videos |
Cookies | C:\Users\john\AppData\Local\Microsoft\Windows\INetCookies |
Desktop | C:\Users\john\Desktop | /Users/john/Desktop
DesktopDirectory | C:\Users\john\Desktop | /Users/john/Desktop
Favorites | C:\Users\john\Favorites | /Users/john/Library/Favorites
Fonts | C:\WINDOWS\Fonts | /Users/john/Library/Fonts
History | C:\Users\john\AppData\Local\Microsoft\Windows\History |
InternetCache | C:\Users\john\AppData\Local\Microsoft\Windows\INetCache | /Users/john/Library/Caches
LocalApplicationData | C:\Users\john\AppData\Local | /Users/john/.local/share
LocalizedResources | |
MyComputer | |
MyDocuments | C:\Users\john\Documents | /Users/john
MyDocuments | C:\Users\john\Documents | /Users/john
MyMusic | C:\Users\john\Music | /Users/john/Music
MyPictures | C:\Users\john\Pictures | /Users/john/Pictures
MyVideos | C:\Users\john\Videos |
NetworkShortcuts | C:\Users\john\AppData\Roaming\Microsoft\Windows\Network Shortcuts |
PrinterShortcuts | |
ProgramFiles | C:\Program Files | /Applications
ProgramFilesX86 | C:\Program Files (x86) |
Programs | C:\Users\john\AppData\Roaming\Microsoft\Windows\Start Menu\Programs |
Recent | C:\Users\john\AppData\Roaming\Microsoft\Windows\Recent |
Resources | C:\WINDOWS\resources |
SendTo | C:\Users\john\AppData\Roaming\Microsoft\Windows\SendTo |
StartMenu | C:\Users\john\AppData\Roaming\Microsoft\Windows\Start Menu |
Startup | C:\Users\john\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup |
System | C:\WINDOWS\system32 | /System
SystemX86 | C:\WINDOWS\SysWOW64 |
Templates | C:\Users\john\AppData\Roaming\Microsoft\Windows\Templates |
UserProfile | C:\Users\john | /Users/john
Windows | C:\WINDOWS |

The code for this is pretty straightforward. I enumerate the possible enum values and write each one, along with its resolved path, out to a CSV.

static void Main(string[] args)
{
    StringBuilder sb = new StringBuilder();
    foreach (Environment.SpecialFolder sf in Enum.GetValues(typeof(System.Environment.SpecialFolder)))
    {
        sb.AppendLine($"{sf.ToString()}, {Environment.GetFolderPath(sf)}");
    }
    // Location (not FullName) gives the assembly's file path on disk
    var path = System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    var fileName = GetFileName();
    var filePath = System.IO.Path.Combine(path, $"{fileName}.csv");
    System.IO.File.WriteAllText(filePath, sb.ToString());
}

static string GetFileName()
{
    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
        return "Win";
    else if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
        return "OSX";

    return "Linux";
}
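
One detail worth guarding against: for the blank Mac cells in the table above, Environment.GetFolderPath returns an empty string rather than throwing. Cross-platform code should check for that before using the result. A minimal sketch:

```csharp
static void Main(string[] args)
{
    // GetFolderPath returns string.Empty (it does not throw) when a
    // special folder has no mapping on the current platform.
    var videos = Environment.GetFolderPath(Environment.SpecialFolder.MyVideos);
    if (string.IsNullOrEmpty(videos))
        Console.WriteLine("MyVideos is not mapped on this platform");
    else
        Console.WriteLine($"MyVideos: {videos}");
}
```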

If you just want to pull the code and run it, I have a copy up on GitHub. As you can see, some special folders map directly to Mac OSX and others do not. When you think about it, they all make sense. As long as you understand the values you will get back in the various scenarios, you can use the ones that are appropriate for your application.

Dealing with Duplicate Assembly Attributes in .Net Core

When migrating a project from .Net Framework to .Net Standard, you may run into issues where you get duplicate assembly attributes. An example you might see is something like this:

Severity: Error
Code: CS0579
Description: Duplicate 'System.Reflection.AssemblyTitleAttribute' attribute
Project: MyProject
File: D:\Dev\MyProject\obj\Debug\netstandard2.0\MyProject.AssemblyInfo.cs
Line: 20

I ran into this because I have an AssemblyInfo.cs with an AssemblyTitleAttribute and the .Net Standard project is also generating the AssemblyTitleAttribute. After reading through some GitHub issues, it appears there are two ways around this issue.

First, I could remove the AssemblyInfo.cs that I already had in my project and add the appropriate attributes to the csproj file. Since I am converting a .Net Framework project in place with a new solution and csproj file, this will not work for me, so I am left with the second option.
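
For those who can take the first route, SDK-style projects let you express the old attributes as MSBuild properties. A sketch of what that might look like (the values shown are placeholders):

```xml
<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <!-- These properties feed the generated AssemblyInfo, replacing the
             attributes that used to live in AssemblyInfo.cs -->
        <AssemblyTitle>MyProject</AssemblyTitle>
        <Description>A short description of the assembly.</Description>
        <Product>MyProduct</Product>
        <Company>MyCompany</Company>
    </PropertyGroup>
</Project>
```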

The second option is to add settings to the csproj file indicating that the various attributes should not be generated. Here is an example csproj file with a few of the attributes disabled:

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>
        <GenerateAssemblyDescriptionAttribute>false</GenerateAssemblyDescriptionAttribute>
        <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>
        <GenerateAssemblyTitleAttribute>false</GenerateAssemblyTitleAttribute>
    </PropertyGroup>    
</Project>

Once those settings are added to the csproj file, everything compiles and there are no duplicate attribute errors.

Simple Interprocess Communication in .Net Core using Protobuf

In the past, I have used WCF to handle inter-process communication (IPC) between various separate components of my client applications. Since .Net Core doesn’t yet support WCF server-side code, I had to look into alternatives. The two main approaches I have found are a TCP server and a NamedPipeServerStream. Others have covered the TCP approach, so I wanted to see what could be done with the NamedPipeServerStream.

I started reading the MSDN documentation on the basics of IPC with named pipes and found that it worked with .Net Core 2.0 with no changes. This is the true benefit of .Net Core: an older article about IPC is still completely relevant even though the code is now running on a Mac instead of a Windows machine. One thing I didn’t like too much about that article was the StreamString class, and I wanted to see what I could do with plain old C# objects.

I decided to try out Protobuf. I had heard about it in the past and figured this would be a good foray into learning more about it. Since I was developing a client and a server, I decided I would start with the API and put that into a shared class project. So I created a Common project, added a reference to protobuf, and defined a Person class in there:

[ProtoContract]
public class Person
{
    [ProtoMember(1)]
    public string FirstName { get; set; }

    [ProtoMember(2)]
    public string LastName { get; set; }
}

Decorating the class with the protobuf attributes was all I had to do. Now that it is defined in the Common project, I could write a server to serve up the data and a client to consume it, each referencing the Common library. Next up, I created the server. Following the example linked above, I defined the server console application as:

static void Main(string[] args)
{
    Console.WriteLine("Starting Server");

    var pipe = new NamedPipeServerStream("MyTest.Pipe", PipeDirection.InOut);
    Console.WriteLine("Waiting for connection....");
    pipe.WaitForConnection();

    Console.WriteLine("Connected");

    Serializer.Serialize(pipe, new Person() { FirstName="Janey", LastName = "McJaneFace" });
    pipe.Disconnect();
}

I am simply defining a NamedPipeServerStream listening on a pipe named “MyTest.Pipe”. For now, the code immediately writes an object to the connection, which can then be read from the client side. This is achieved using protobuf’s Serializer.Serialize method. To define the client, I need to use a NamedPipeClientStream to connect to the same pipe.

static void Main(string[] args)
{
    Console.WriteLine("Client");
    var pipe = new NamedPipeClientStream(".", "MyTest.Pipe", PipeDirection.InOut, PipeOptions.None);
    Console.WriteLine("Connecting");
    pipe.Connect();
    var person = Serializer.Deserialize<Person>(pipe);
    Console.WriteLine($"Person: {person.FirstName} {person.LastName}");
    Console.WriteLine("Done");
}

Once I connect, I use protobuf’s Serializer.Deserialize method to read from the stream and deserialize the Person object. That’s it: I am passing data from one process to another in .Net Core. If you are using .Net Core 1.x, you will need to explicitly add a reference to the System.IO.Pipes NuGet package. For both 1.x and 2.0, you need to add a NuGet reference to protobuf.

Even though this is a basic example, it does demonstrate the functionality and could be easily extended to handle much more complex scenarios.
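
One natural extension: a raw protobuf stream has no message boundaries, so to send more than one object over the same pipe you can use protobuf-net's length-prefixed helpers. A sketch, using a MemoryStream as a stand-in for the pipe (with the server/client above, you would pass the pipe stream instead) and the Person class from earlier:

```csharp
static void Main(string[] args)
{
    // Each message is framed with a length prefix so the reader knows
    // where one Person ends and the next begins.
    var stream = new MemoryStream();
    Serializer.SerializeWithLengthPrefix(stream,
        new Person { FirstName = "Janey", LastName = "McJaneFace" }, PrefixStyle.Base128);
    Serializer.SerializeWithLengthPrefix(stream,
        new Person { FirstName = "Johnny", LastName = "McJohnFace" }, PrefixStyle.Base128);

    // Read the messages back in order, using the same prefix style.
    stream.Position = 0;
    var first = Serializer.DeserializeWithLengthPrefix<Person>(stream, PrefixStyle.Base128);
    var second = Serializer.DeserializeWithLengthPrefix<Person>(stream, PrefixStyle.Base128);
    Console.WriteLine($"{first.FirstName}, {second.FirstName}");
}
```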

A fully working solution for this can be found as a sample GitHub project. There appear to be other .Net Core/Standard projects (1, 2) attempting to better facilitate IPC, and it will be interesting to see how they mature with the ecosystem. My hope is that some flavor of WCF server makes its way over to .Net Core, to make porting code that much easier.

Minified Javascript not Deploying With .Net Core Projects Created in Visual Studio 2017

I was working on a very simple site that I created using the new .Net Core project templates in Visual Studio 2017. Everything worked great on my machine, but, when I deployed to Azure, none of my custom JavaScript or CSS was working properly. What gives?

After doing some digging, I found that the deployed site was trying to use the site.min.js and the site.min.css, but those files weren’t deployed to Azure. After googling a bit, I found that it was probably an issue with my bundling and when I opened the bundleconfig.json, Visual Studio tried to be helpful:

Extensions are available...

Of course, I ignored the extension warning and comment at first, but the missing extension solves the exact problem I was having. The link in the comments points to an article on how to enable and configure bundling in ASP.NET Core.

So, while the Visual Studio team could work on making this a better experience, I have to remember to read the warnings and comments that are left in the generated code. They are there for a reason.

Zero to CI in Thirty Minutes or Less (Or It's Free!)

One of the biggest complaints I hear from teams about CI is that it is too much work. While getting it perfect can be a lot of work, getting started couldn’t be easier.

I am going to demonstrate continuously building a C# project using Jenkins as our CI host.

To get started we’ll need a machine to be our build agent. I am going to create a VM in Azure to be my build agent. Since I am building a C# project, I am going to choose the template that already has Visual Studio 2017 installed on it. But this could be any machine. It could be an extra machine you have sitting under your desk or a virtual machine in your own datacenter.

Azure template image

Once the machine is created, you can connect to it and install Jenkins. Start by downloading and running the Windows installer.

Once installed, a browser window will open that you can use to administer Jenkins. It may open before Jenkins has a chance to start, so you may need to refresh the page. Follow the instructions on the page to unlock Jenkins for first time use.

You will be prompted to install plugins. Plugins are the lifeblood of the Jenkins ecosystem and there are plugins to do pretty much everything. You can start by installing the suggested plugins.

Customize Jenkins

This will install a handful of plugins to get us started. Once complete, you will be asked to set up an admin user. Go through the steps of creating the user, and Jenkins is ready to go.

Jenkins is Ready

We are going to create a job to build a simple C# library I am hosting on GitHub to demonstrate Jenkins builds. Now we can create a new job and give it a name. The easiest project to configure in Jenkins is a freestyle project. This allows you to do any type of build you want by combining source control and scripts to accomplish your task (along with features from whatever plugins you have installed).

Jenkins freestyle project

Next, we will configure the project to pull from GitHub and give it a little batch script to build the project and run the tests. In the Source Code Management section, we will select Git and enter our repository URL.

Jenkins Source Control Configuration

Then, in the Build section, we will set up our script to build the project. Since this is a .Net Core project and I have the 2017 core tools installed, I can simply specify a batch command with the following script:

dotnet restore
dotnet build
cd UnitTests
dotnet test

Save the job, then click the Build Now button on the left-hand side. This will start a job, which will appear in the Build History portion of the page. You can click on the build number to get more information about the build. The most useful information here is the console: clicking Console Output shows the full console output of your build. Since this is your first build on the machine, you will see information about populating your local cache for the first time, then the project build output, and finally the tests running and passing.

At this point, we have a build server that builds our project on demand, but not continuously. To set that up, we can go back to the project page and select the Configure option. We’ll use the Poll SCM option to configure the job to poll for changes from GitHub every 15 minutes. In the Schedule box, enter the following value:

H/15 * * * *

The format for this schedule mainly follows the cron syntax. Clicking on the ? next to the schedule box will give you plenty of examples and information on how you might want to configure your job. Save the job and you are good to go.
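
For reference, a few other schedules in the same syntax (the fields are minute, hour, day of month, month, day of week; the H token hashes the job name to spread polling load across the window):

```
H/15 * * * *    # roughly every fifteen minutes
H H * * *       # once a day, at a hashed time
H 22 * * 1-5    # weeknights, around 22:00
```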

You now have a build server that will build within 15 minutes of a change to the repository. Congratulations, you have a CI server. As you can see, getting started with CI is not hard and there is really no excuse for not having some sort of automation around your builds.



FAQ

Q: But John, why did you ignore webhooks?

A: While setting up webhooks is very straightforward, securing a Jenkins installation to be accessible via the internet is another thing altogether. I decided that using polling was a better approach than teaching people to set up an insecure Jenkins installation and have them get hacked. I’ll probably have a few more posts where I cover setting up webhooks for your Jenkins jobs.

Using a CSharp Syntax Rewriter

One interesting feature of the Roslyn engine is the CSharpSyntaxRewriter. This is a base class you can extend that implements the visitor pattern. You simply override the Visit method appropriate for your use case, and you can rewrite any portion of a syntax tree.

Consider the following code:

public class Foo
{
    public string _bar = "baz";
}

Now, let’s say we want to change "baz" to something else. To do this, we implement a new CSharpSyntaxRewriter that overrides VisitLiteralExpression so that our code only executes on LiteralExpression syntax nodes. We then check whether the node is a StringLiteralExpression and, if it is, create a new node.

class LiteralRewriter : CSharpSyntaxRewriter
{
    public override SyntaxNode VisitLiteralExpression(LiteralExpressionSyntax node)
    {
        if (!node.IsKind(SyntaxKind.StringLiteralExpression))
        {
            return base.VisitLiteralExpression(node);
        }

        var retVal = SyntaxFactory.LiteralExpression(SyntaxKind.StringLiteralExpression, 
                                                      SyntaxFactory.Literal("NotBaz"));
        return retVal;
    }
}

The rewriter can be passed any SyntaxNode and it will run on that node and its descendants. So, to use this, I can get a SyntaxTree from a SemanticModel that I get from a CSharpCompilation. Here is a full working sample:

var tree = CSharpSyntaxTree.ParseText(@"
  public class Foo
  {
      public string _bar = ""baz"";
  }");
var Mscorlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);
var compilation = CSharpCompilation.Create("MyCompilation",
    syntaxTrees: new[] { tree }, references: new[] { Mscorlib });
var model = compilation.GetSemanticModel(tree);
var root = model.SyntaxTree.GetRoot();

var rw = new LiteralRewriter();
var newRoot = rw.Visit(root);

Console.WriteLine(newRoot.GetText());
Console.ReadLine();

As you can see, a rewriter is a quick and easy way to manipulate a syntax node that you have access to. It’s a great tool to use when you have some simple changes you need to make to a syntax tree.
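
One caveat: as written, the rewriter replaces every string literal in the tree, not just "baz". If you want to target specific literals, and keep the surrounding whitespace and comments intact, you can check the token's value and copy the original node's trivia. A sketch of a more targeted variant:

```csharp
class TargetedRewriter : CSharpSyntaxRewriter
{
    public override SyntaxNode VisitLiteralExpression(LiteralExpressionSyntax node)
    {
        // Skip everything except string literals whose value is exactly "baz"
        if (!node.IsKind(SyntaxKind.StringLiteralExpression) || node.Token.ValueText != "baz")
        {
            return base.VisitLiteralExpression(node);
        }

        // WithTriviaFrom preserves the original node's leading/trailing trivia
        return SyntaxFactory.LiteralExpression(SyntaxKind.StringLiteralExpression,
                                               SyntaxFactory.Literal("NotBaz"))
                            .WithTriviaFrom(node);
    }
}
```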

If you have come up with a good use for syntax rewriters, leave a comment below.

Windows Installer Error Codes

My team was recently working on our installer when we ran into an unexpected return code from one of our prerequisites (specifically the Visual C++ 2015 Runtime). From the logs, we found the error code was 0x80070666 and the return code was 0x666 (a newer version of the runtime was already installed). We could easily handle this scenario, but we were looking for an extensive list of return codes for the C++ redistributables and could not easily find one. Eventually we ran across the list of MsiExec.exe and InstMsi.exe Error Messages and figured out why our searches were yielding no results: the error codes in the log files are all in hex, while the error codes on the website are all in decimal. So, to help others in the future, here is a list of all of the error codes from that link with their hex equivalents.

Error Code | Value | Description | Hex Value | HRESULT
ERROR_SUCCESS 0 The action completed successfully. 0x0 0x80070000
ERROR_INVALID_DATA 13 The data is invalid. 0xD 0x8007000D
ERROR_INVALID_PARAMETER 87 One of the parameters was invalid. 0x57 0x80070057
ERROR_CALL_NOT_IMPLEMENTED 120 This value is returned when a custom action attempts to call a function that cannot be called from custom actions. The function returns the value ERROR_CALL_NOT_IMPLEMENTED. Available beginning with Windows Installer version 3.0. 0x78 0x80070078
ERROR_APPHELP_BLOCK 1259 If Windows Installer determines a product may be incompatible with the current operating system, it displays a dialog box informing the user and asking whether to try to install anyway. This error code is returned if the user chooses not to try the installation. 0x4EB 0x800704EB
ERROR_INSTALL_SERVICE_FAILURE 1601 The Windows Installer service could not be accessed. Contact your support personnel to verify that the Windows Installer service is properly registered. 0x641 0x80070641
ERROR_INSTALL_USEREXIT 1602 The user cancels installation. 0x642 0x80070642
ERROR_INSTALL_FAILURE 1603 A fatal error occurred during installation. 0x643 0x80070643
ERROR_INSTALL_SUSPEND 1604 Installation suspended, incomplete. 0x644 0x80070644
ERROR_UNKNOWN_PRODUCT 1605 This action is only valid for products that are currently installed. 0x645 0x80070645
ERROR_UNKNOWN_FEATURE 1606 The feature identifier is not registered. 0x646 0x80070646
ERROR_UNKNOWN_COMPONENT 1607 The component identifier is not registered. 0x647 0x80070647
ERROR_UNKNOWN_PROPERTY 1608 This is an unknown property. 0x648 0x80070648
ERROR_INVALID_HANDLE_STATE 1609 The handle is in an invalid state. 0x649 0x80070649
ERROR_BAD_CONFIGURATION 1610 The configuration data for this product is corrupt. Contact your support personnel. 0x64A 0x8007064A
ERROR_INDEX_ABSENT 1611 The component qualifier not present. 0x64B 0x8007064B
ERROR_INSTALL_SOURCE_ABSENT 1612 The installation source for this product is not available. Verify that the source exists and that you can access it. 0x64C 0x8007064C
ERROR_INSTALL_PACKAGE_VERSION 1613 This installation package cannot be installed by the Windows Installer service. You must install a Windows service pack that contains a newer version of the Windows Installer service. 0x64D 0x8007064D
ERROR_PRODUCT_UNINSTALLED 1614 The product is uninstalled. 0x64E 0x8007064E
ERROR_BAD_QUERY_SYNTAX 1615 The SQL query syntax is invalid or unsupported. 0x64F 0x8007064F
ERROR_INVALID_FIELD 1616 The record field does not exist. 0x650 0x80070650
ERROR_INSTALL_ALREADY_RUNNING 1618 Another installation is already in progress. Complete that installation before proceeding with this install. 0x652 0x80070652
ERROR_INSTALL_PACKAGE_OPEN_FAILED 1619 This installation package could not be opened. Verify that the package exists and is accessible, or contact the application vendor to verify that this is a valid Windows Installer package. 0x653 0x80070653
ERROR_INSTALL_PACKAGE_INVALID 1620 This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package. 0x654 0x80070654
ERROR_INSTALL_UI_FAILURE 1621 There was an error starting the Windows Installer service user interface. Contact your support personnel. 0x655 0x80070655
ERROR_INSTALL_LOG_FAILURE 1622 There was an error opening installation log file. Verify that the specified log file location exists and is writable. 0x656 0x80070656
ERROR_INSTALL_LANGUAGE_UNSUPPORTED 1623 This language of this installation package is not supported by your system. 0x657 0x80070657
ERROR_INSTALL_TRANSFORM_FAILURE 1624 There was an error applying transforms. Verify that the specified transform paths are valid. 0x658 0x80070658
ERROR_INSTALL_PACKAGE_REJECTED 1625 This installation is forbidden by system policy. Contact your system administrator. 0x659 0x80070659
ERROR_FUNCTION_NOT_CALLED 1626 The function could not be executed. 0x65A 0x8007065A
ERROR_FUNCTION_FAILED 1627 The function failed during execution. 0x65B 0x8007065B
ERROR_INVALID_TABLE 1628 An invalid or unknown table was specified. 0x65C 0x8007065C
ERROR_DATATYPE_MISMATCH 1629 The data supplied is the wrong type. 0x65D 0x8007065D
ERROR_UNSUPPORTED_TYPE 1630 Data of this type is not supported. 0x65E 0x8007065E
ERROR_CREATE_FAILED 1631 The Windows Installer service failed to start. Contact your support personnel. 0x65F 0x8007065F
ERROR_INSTALL_TEMP_UNWRITABLE 1632 The Temp folder is either full or inaccessible. Verify that the Temp folder exists and that you can write to it. 0x660 0x80070660
ERROR_INSTALL_PLATFORM_UNSUPPORTED 1633 This installation package is not supported on this platform. Contact your application vendor. 0x661 0x80070661
ERROR_INSTALL_NOTUSED 1634 Component is not used on this machine. 0x662 0x80070662
ERROR_PATCH_PACKAGE_OPEN_FAILED 1635 This patch package could not be opened. Verify that the patch package exists and is accessible, or contact the application vendor to verify that this is a valid Windows Installer patch package. 0x663 0x80070663
ERROR_PATCH_PACKAGE_INVALID 1636 This patch package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer patch package. 0x664 0x80070664
ERROR_PATCH_PACKAGE_UNSUPPORTED 1637 This patch package cannot be processed by the Windows Installer service. You must install a Windows service pack that contains a newer version of the Windows Installer service. 0x665 0x80070665
ERROR_PRODUCT_VERSION 1638 Another version of this product is already installed. Installation of this version cannot continue. To configure or remove the existing version of this product, use Add/Remove Programs in Control Panel. 0x666 0x80070666
ERROR_INVALID_COMMAND_LINE 1639 Invalid command line argument. Consult the Windows Installer SDK for detailed command-line help. 0x667 0x80070667
ERROR_INSTALL_REMOTE_DISALLOWED 1640 The current user is not permitted to perform installations from a client session of a server running the Terminal Server role service. 0x668 0x80070668
ERROR_SUCCESS_REBOOT_INITIATED 1641 The installer has initiated a restart. This message is indicative of a success. 0x669 0x80070669
ERROR_PATCH_TARGET_NOT_FOUND 1642 The installer cannot install the upgrade patch because the program being upgraded may be missing or the upgrade patch updates a different version of the program. Verify that the program to be upgraded exists on your computer and that you have the correct upgrade patch. 0x66A 0x8007066A
ERROR_PATCH_PACKAGE_REJECTED 1643 The patch package is not permitted by system policy. 0x66B 0x8007066B
ERROR_INSTALL_TRANSFORM_REJECTED 1644 One or more customizations are not permitted by system policy. 0x66C 0x8007066C
ERROR_INSTALL_REMOTE_PROHIBITED 1645 Windows Installer does not permit installation from a Remote Desktop Connection. 0x66D 0x8007066D
ERROR_PATCH_REMOVAL_UNSUPPORTED 1646 The patch package is not a removable patch package. Available beginning with Windows Installer version 3.0. 0x66E 0x8007066E
ERROR_UNKNOWN_PATCH 1647 The patch is not applied to this product. Available beginning with Windows Installer version 3.0. 0x66F 0x8007066F
ERROR_PATCH_NO_SEQUENCE 1648 No valid sequence could be found for the set of patches. Available beginning with Windows Installer version 3.0. 0x670 0x80070670
ERROR_PATCH_REMOVAL_DISALLOWED 1649 Patch removal was disallowed by policy. Available beginning with Windows Installer version 3.0. 0x671 0x80070671
ERROR_INVALID_PATCH_XML 1650 The XML patch data is invalid. Available beginning with Windows Installer version 3.0. 0x672 0x80070672
ERROR_PATCH_MANAGED_ADVERTISED_PRODUCT 1651 Administrative user failed to apply patch for a per-user managed or a per-machine application that is in advertise state. Available beginning with Windows Installer version 3.0. 0x673 0x80070673
ERROR_INSTALL_SERVICE_SAFEBOOT 1652 Windows Installer is not accessible when the computer is in Safe Mode. Exit Safe Mode and try again or try using System Restore to return your computer to a previous state. Available beginning with Windows Installer version 4.0. 0x674 0x80070674
ERROR_ROLLBACK_DISABLED 1653 Could not perform a multiple-package transaction because rollback has been disabled. Multiple-Package Installations cannot run if rollback is disabled. Available beginning with Windows Installer version 4.5. 0x675 0x80070675
ERROR_INSTALL_REJECTED 1654 The app that you are trying to run is not supported on this version of Windows. A Windows Installer package, patch, or transform that has not been signed by Microsoft cannot be installed on an ARM computer. 0x676 0x80070676
ERROR_SUCCESS_REBOOT_REQUIRED 3010 A restart is required to complete the install. This message is indicative of a success. This does not include installs where the ForceReboot action is run. 0xBC2 0x80070BC2

Any installer that uses standard MSI technology will likely return these same error codes (e.g., the Microsoft C++ redistributables vcredist_x86.exe and vcredist_x64.exe).
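Because these values surface as the installer process exit code, a deployment script can branch on them. Below is a minimal C# sketch of that idea; the InstallerExitCodes class and its Describe method are invented here for illustration, and in a real script the sample values would come from Process.ExitCode after launching msiexec:

```csharp
using System;

static class InstallerExitCodes
{
    // Values taken from the Windows Installer error table above.
    public const int Success = 0;
    public const int SuccessRebootRequired = 3010;

    public static string Describe(int exitCode)
    {
        if (exitCode == Success)
            return "Install succeeded.";
        if (exitCode == SuccessRebootRequired)
            return "Install succeeded; a restart is required to complete it.";
        return "Install failed with Windows Installer error code " + exitCode + ".";
    }
}

class Program
{
    static void Main()
    {
        // Sample values standing in for Process.ExitCode.
        Console.WriteLine(InstallerExitCodes.Describe(0));
        Console.WriteLine(InstallerExitCodes.Describe(3010));
        Console.WriteLine(InstallerExitCodes.Describe(1645));
    }
}
```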

Sending a Project to the C# Interactive Window in VS 2015 Update 2

Visual Studio 2015 Update 2 was just released, and one of the cool new features is the ability to send a project to the C# interactive window.

You can right-click on a project, send it to the interactive window, and then use the classes and methods from that project there. This can be very useful if you just want to do some quick prototyping or testing of your methods.

To enable this functionality, right click on your project and select the Initialize Interactive With Project menu item.

InitializeInteractiveWithProject

You'll see it build your project and then add the project and all of its dependencies as references to the interactive window.

> #reset
Resetting execution engine.
Loading context from 'CSharpInteractive.rsp'.
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\System.dll"
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\System.Core.dll"
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\System.Xml.Linq.dll"
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\System.Data.DataSetExtensions.dll"
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\Microsoft.CSharp.dll"
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\System.Data.dll"
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\System.Net.Http.dll"
> #r "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5.1\System.Xml.dll"
> #r "ConsoleApplication1.exe"
> using ConsoleApplication1;
>

My console application has an Add method in the Baz class. To use it, I can simply do the following:

> Baz.Add(4,5)
9
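For context, the Baz class referenced above might look like the following. This is a hypothetical sketch; the original implementation isn't shown in the post:

```csharp
using System;

namespace ConsoleApplication1
{
    public static class Baz
    {
        // A trivial method to exercise from the interactive window.
        public static int Add(int x, int y)
        {
            return x + y;
        }
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(ConsoleApplication1.Baz.Add(4, 5)); // prints 9
    }
}
```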

As you can see, this new functionality is moving closer to what is available in the F# interactive window and I hope to see more developers grab onto this functionality to improve their development workflows, as it can really change the way you work.

Searching and Filtering Tests in Test Explorer

If you take a quick glance at the Test Explorer window in Visual Studio 2015, you might not notice all of the power you have in that little window. To start, there is a grouping icon that allows you to group your tests by various properties. Grouping is a great way to bring related tests together so you can easily select and run them in Test Explorer.

TestExplorerGroupBy

You have the option to group by:

Class: This will group by the class name to which the test belongs. Note: this is not the fully qualified class name, so if you have multiple classes with the same name in different namespaces, they will be hard to differentiate.
Duration: This will group by the duration of the last test run, using 3 categories (Fast < 100 ms, Medium > 100 ms, and Slow > 1 sec).
Outcome: This will group by the outcome of the last run of the tests, using 3 categories (Passed Tests, Failed Tests, and Skipped Tests).
Traits: This will group tests based on the TestCategory, Owner, Priority, and TestProperty attributes assigned to tests. Note: tests can have multiple trait attributes assigned to them, so a single test could appear multiple times in this view.
Project: This will group tests by the project to which they belong.

While grouping is nice, the real power of this window is the search feature. From the documentation, you can search on the following:

Trait: Searches both the trait category and value for matches. The syntax to specify trait categories and values is defined by the unit test framework.
Project: Searches the test project names for matches.
Error Message: Searches the user-defined error messages returned by failed asserts for matches.
File Path: Searches the fully qualified file names of test source files for matches.
Fully Qualified Name: Searches the fully qualified names of test namespaces, classes, and methods for matches.
Output: Searches the user-defined messages written to standard output (stdout) or standard error (stderr). The syntax to specify output messages is defined by the unit test framework.
Outcome: Searches the Test Explorer category names for matches: Failed Tests, Skipped Tests, Passed Tests.

Let's take an example. Say I have the following tests in my system (implementations removed for brevity):

[TestMethod]
public async Task TestMethodNone()

[TestMethod, TestCategory("Unit")]
public async Task TestMethodUnit()

[TestMethod, TestCategory("DAL"), TestCategory("Unit")]
public async Task TestMethodDALUnit()

[TestMethod, TestCategory("DAL"), TestCategory("Unit")]
public async Task TestMethodDALUnit2()

[TestMethod, TestCategory("DAL"), TestCategory("Integration")]
public async Task TestMethodADALIntegration()

If I group by trait and don't filter anything, then I'll see the following tests:

TestExplorerGroupedByTrait

Next, I could filter the tests by specifying I only want tests with the Unit trait. The search term would be Trait:"Unit":

TestExplorerUnitTestsOnly

I can also filter to only show tests that are both DAL and Unit tests by using the search term Trait:"Unit" Trait:"DAL":

TestExplorerUnitAndDAL

If I want to exclude tests with a given attribute, I could exclude all DAL tests by using the minus symbol, so my search term would be Trait:"Unit" -Trait:"DAL":

TestExplorerUnitNotDAL

You can also pair this with other searchable attributes on the tests. So, after a test run, if I want to find all unit tests that failed, I could use the search term Trait:"Unit" Outcome:"Failed":

TestExplorerFailedUnitTests

As you can see, the grouping and filtering available to you in the Test Explorer window is pretty robust; it just takes a little time to dig into it and learn the syntax. The Run unit tests with Test Explorer article on MSDN gives a lot of good information on this topic and is a worthwhile read if you are using this window in your day to day work. Thanks to William Custode for asking a question on StackOverflow that gave me inspiration for this blog post.

Editing Files on your Azure Hosted WordPress Site

I host this blog on Windows Azure using WordPress. There have been times when I needed to edit my web.config file or manually remove some plugins that were causing trouble. After quite a bit of searching, I found that the easiest way to do this is to use Visual Studio Online to edit the files in my hosted site.

To start, you need to add the Visual Studio Online extension to your Web App. To do this, select the tools option from your Web App and select the Extensions menu in the Develop section.

AzureTools

Next, select the Add button to add a new extension.

AddExtension

Select Choose Extension and find the Visual Studio Online extension.

VisualStudioExtension

Once installed, you can then select the Visual Studio Online extension from the Extensions view and select the Browse button. This will launch a new window from which you can explore the contents.

Browse

For example, you can select and open your web.config file and quickly make changes to it.

VSOEditWebConfig

I have found this tool useful a few times, allowing me to enable woff files and to enable SSL on all pages. Hopefully you find it as useful as I do.

What does Environment.OSVersion return on Windows 10?

Unfortunately, the answer is, it depends.

Consider the following code:

Console.WriteLine(Environment.OSVersion.ToString());

Depending on how you execute the code, you may get differing results:

Visual Studio Version Project Type Output
2015 Update 1 Console App Microsoft Windows NT 6.2.9200.0
2015 Update 1 Unit Test Microsoft Windows NT 10.0.10586.0
2015 (No update 1) Console App Microsoft Windows NT 6.2.9200.0
2015 (No update 1) Unit Test Microsoft Windows NT 6.2.9200.0

So, how can we get our console application to be consistent with the test application in VS 2015 update 1? The key is manifest files. According to the Operating System Version documentation:

Applications not manifested for Windows 8.1 or Windows 10 will return the Windows 8 OS version value (6.2). To manifest your applications for Windows 8.1 or Windows 10, refer to Targeting your application for Windows.

So we can add a manifest file to our console application to specify Windows 10 compatibility:

<?xml version="1.0" encoding="utf-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <assemblyIdentity version="1.0.0.0" name="MyApplication.app"/>
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- Windows 10 -->
      <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}" />
    </application>
  </compatibility>
</assembly>

Now, when we run the console application, we get the output Microsoft Windows NT 10.0.10586.0.

So, what changed in Visual Studio 2015 Update 1? This is speculation, but I am guessing they added a manifest to the executable that runs the unit tests behind the scenes, which caused Environment.OSVersion to return the new value for Windows 10. As the documentation states:

Identifying the current operating system is usually not the best way to determine whether a particular operating system feature is present. This is because the operating system may have had new features added in a redistributable DLL. Rather than using the Version API Helper functions to determine the operating system platform or version number, test for the presence of the feature itself.

So, if you plan on using Environment.OSVersion in the future, be sure to understand the different values it can return depending on how the code is hosted.
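In the spirit of the documentation's advice, here is a small sketch of feature detection instead of version checking. The FeatureProbe helper and the probed type name are purely illustrative; the point is to test for the capability itself rather than branch on Environment.OSVersion:

```csharp
using System;

static class FeatureProbe
{
    // Returns true when the named type can be resolved at runtime.
    public static bool HasType(string typeName)
    {
        return Type.GetType(typeName) != null;
    }
}

class Program
{
    static void Main()
    {
        // Probe for the feature rather than comparing OS version numbers.
        if (FeatureProbe.HasType("System.Threading.Tasks.Task"))
            Console.WriteLine("Task-based APIs are available.");
        else
            Console.WriteLine("Falling back to older APIs.");
    }
}
```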

Unable to Resolve DNX Dependencies on OSX

I was building .NET Core console applications on OS X and ran into an issue where certain core dependencies would not resolve when running dnu build. The error output was:

Building ConsoleApplication2 for DNX,Version=v4.5.1
  Using Project dependency ConsoleApplication2 1.0.0
    Source: /Users/john/Dev/ConsoleApplication2/project.json

  Unable to resolve dependency fx/mscorlib

  Unable to resolve dependency fx/System

  Unable to resolve dependency fx/System.Core

  Unable to resolve dependency fx/Microsoft.CSharp
  ...

After doing some testing, I found that if I specified the core framework (dnxcore50) to the build command, then it would work properly.

dnu build --framework dnxcore50

After doing some more digging, I found that I did not have the Mono framework installed on the machine, so the v4.5.1 build was failing. To correct this, I simply had to install the Mono-based runtime using dnvm:

dnvm upgrade -r mono

After that, dnu build works properly and I can continue on my way.
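Alternatively, if the project doesn't actually need the full-framework build at all, another option is to list only the core framework in the frameworks section of project.json, so dnu build never attempts the v4.5.1 target. A minimal sketch of that section:

```json
{
  "frameworks": {
    "dnxcore50": { }
  }
}
```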

Parallel Test Execution in Visual Studio 2015 Update 1 Might Not Be What You Expect

In Update 1 of Visual Studio 2015, it was announced that MSTest supports running unit tests in parallel. I decided to give this a shot and see exactly how it worked. I started by writing 8 unit tests that all looked like this:

[TestMethod]
public async Task TestMethod7()
{
    Console.WriteLine("1");
    await Task.Delay(5000);
}

Next, I added the RunSettings file to my project with a MaxCpuCount of 6:

<?xml version='1.0' encoding='utf-8'?>
<RunSettings>
  <RunConfiguration>
    <MaxCpuCount>6</MaxCpuCount>
  </RunConfiguration>
</RunSettings>

Finally, I selected my run settings file from the Test Settings Menu:

SelectTestSettings

I ran it and all of my tests still ran serially. I thought maybe I had done something wrong, or perhaps hit a bug in the new feature, so I returned to the Update 1 announcement and found my answer. It states:

Parallel test execution leverages the available cores on the machine, and is realized by launching the test execution engine on each available core as a distinct process, and handing it a container (assembly, DLL, or relevant artifact containing the tests to execute) worth of tests to execute.

The separate container bit is the piece I was missing. In order to get my tests to run in parallel, I needed to split up my tests into separate test assemblies. After doing that, I saw that the tests in different assemblies were running in parallel.

The fact that tests only run in parallel across assemblies is a subtle point that may cause confusion if you think that just setting MaxCpuCount in a run settings file will benefit a single test assembly. Overall, I am glad to see that Microsoft is still improving MSTest, and I hope they continue to add features that let us better parallelize our testing.

Update (2016-06-15) - I created a sample set of tests on GitHub to better demonstrate this functionality.

Recording your screen using Skype For Business

In this video, I demonstrate how to record your screen using Skype For Business.

The C# Interactive Window in Visual Studio 2015 Update 1

The next update of Visual Studio 2015 will contain the C# interactive window. This is a REPL that allows you to execute C# code. To access this, open the View=>Other Windows=>C# Interactive menu item.

C# Interactive Menu

Now that you have the window open, the first thing you should do is read the help, so you get the lay of the land. Type #help to get a list of commands. Here you can see some of the keyboard shortcuts for moving between various submissions and some commands to clear (#cls) and reset the window (#reset). Note that as of the current release candidate I am experiencing some issues with the arrow keys not behaving as described.

Now you can start typing valid C# code and it will be evaluated in the window. For example, I could type the following:

> var x = 1;
> x = x + 1;
> x++;

Each of these statements is evaluated, and the value of x is maintained as it is mutated by these commands. Finally, I can type x and hit Ctrl-Enter, or Enter twice, to evaluate the value of x:

> x
3

You can also declare using statements inside the interactive window and then have access to all the members in the namespace.

> using System.Threading;
> Thread.Sleep(500);

You can declare functions inside the REPL by simply typing or pasting the function into the window.

> public int AddNumbers(int x, int y)
. {
.     return x + y;
. }
> AddNumbers(5,6)
11

If you need to load another DLL into the REPL, you can do that using the #r command:

> #r "Z:\Dev\ConsoleApplication29\bin\Debug\ConsoleApplication29.exe"
> using ConsoleApplication29;
> var x = new Something();
> x.DoSomething();

As you can see it is very easy to get going quickly with the C# interactive window and it provides you a quick way of prototyping your C# code. The REPL is used heavily in F# development and I think C# developers will find great ways to leverage it as part of their development workflows.

CPU Profiling in Visual Studio 2015 Update 1

In the past, I have covered the Diagnostics Tools and all of the new features that are available in VS 2015. With the upcoming release of update 1, some new features are being added to the diagnostics tools.

When you open the diagnostics tools, you'll notice the CPU Usage tab now shows you information like the Total CPU % and Self CPU %.

DiagnosticToolsCPU

The Total CPU % covers the method and all child methods it calls, while the Self CPU % is limited to the code within the method itself, excluding all child calls. Using these values, you can quickly determine which functions are your most costly in terms of CPU.
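To make the distinction concrete, here is a small sketch invented for illustration: under the profiler, Outer's Total CPU % would include the time spent in Inner, while its Self CPU % would cover only Outer's own loop.

```csharp
using System;

public static class CpuDemo
{
    // Work done directly in a method counts toward its Self CPU time.
    public static long Inner()
    {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) sum += i;
        return sum;
    }

    public static long Outer()
    {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) sum += 2L * i; // Outer's own work (Self)
        return sum + Inner();                            // child call (Total only)
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(CpuDemo.Outer());
    }
}
```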

The CPU percentages also work with the range selection in the Diagnostic Tools window, so if you want to home in on one section of your app where the CPU is spiking, you can.

[video width="1028" height="580" mp4="/content/DiagnosticToolsWithCPU.mp4"][/video]

Overall, I think this is a great addition to Visual Studio and I can't wait to see how people use it. If you want more in depth information on the new CPU features in the diagnostics tools, Nikhil Joglekar has a blog post that covers it in quite a bit of detail.

Creating a Code Analyzer using F#

Note: This article covers creating a C#/VB code analyzer using F#. At this time there is no Roslyn support for analyzing F# code.

In the past we have covered creating code analyzers using C# and VB. Creating an analyzer in F# is just as easy. I have just started the process of learning F# and figured an analyzer would be a great test project to learn on. There aren’t official templates for F# analyzers, but you can take the C# templates and use those as a starting point for F#.

To start, make sure you have the latest version of the .Net Compiler Platform SDK installed.

Next, you’ll want to create an “Analyzer with Code Fix (NuGet + VSIX)” project under the Visual C#->Extensibility group in the project templates.

New Project Dialog

Once you have the project created, add a new F# library project to the solution. Next, modify the VSIX project to deploy the F# project instead of the C# project: open the source.extension.vsixmanifest file, go to the Assets tab, and switch the project on both the Analyzer and MefComponent assets to the new F# library.

VsixManifest

Now you can remove the C# analyzer and test projects from the solution.

Your solution is now set up, but the F# project needs the appropriate references in order to work with analyzers. Add the Microsoft.CodeAnalysis NuGet package to the F# project.

Now we can start coding the analyzer. We’ll implement the same basic analyzer that comes with the C# samples, where it raises a diagnostic whenever there are lowercase characters in type names.

We’ll start creating our analyzer by importing some namespaces we’ll need later in the code.

namespace FSharpFirstAnalyzer
open Microsoft.CodeAnalysis
open Microsoft.CodeAnalysis.Diagnostics
open System.Collections.Immutable
open System.Linq
open System

Next we’ll declare our analyzer and inherit from the DiagnosticAnalyzer base class. Notice that we are registering our analyzer as a C# only analyzer.

[<DiagnosticAnalyzer(Microsoft.CodeAnalysis.LanguageNames.CSharp)>]
type public MyFirstFSAnalyzer() = 
    inherit DiagnosticAnalyzer()

Now we can create a descriptor for our diagnostic and override the SupportedDiagnostics property to return our diagnostics.

let descriptor = DiagnosticDescriptor("FSharpIsLowerCase", 
                            "Types cannot contain lowercase letters", 
                            "{0} contains lowercase letters" , 
                            "Naming", 
                            DiagnosticSeverity.Warning, 
                            true, 
                            "User declared types should not contain lowercase letters.", 
                            null)

override x.SupportedDiagnostics with get() = ImmutableArray.Create(descriptor)

Finally, we can do our work in the Initialize override. We’ll create a symbol analysis function and check if the symbol has any lowercase characters. To do this, we’ll perform a match on the Symbol.Name. Finally we’ll register that function with the context passed into the Initialize method.

    override x.Initialize (context: AnalysisContext) =
        let isLower = System.Func<_,_>(fun l -> Char.IsLower(l))
        let analyze (ctx: SymbolAnalysisContext) = 
            match ctx.Symbol with
                | z when z.Name.ToCharArray().Any(isLower) -> 
                    let d = Diagnostic.Create(descriptor, z.Locations.First(), z.Name)
                    ctx.ReportDiagnostic(d)
                | _->()

        context.RegisterSymbolAction(analyze, SymbolKind.NamedType)

At this point, we can run the analyzer solution and a new Visual Studio instance will appear. This is referred to as the experimental instance and has completely isolated settings from the instance in which you do your main development. Create a simple C# console application and open the Program.cs. You should get a diagnostic on the Program class indicating it contains lowercase letters.

lowercaseLetters

The full code for our F# analyzer is:

namespace FSharpFirstAnalyzer
open Microsoft.CodeAnalysis
open Microsoft.CodeAnalysis.Diagnostics
open System.Collections.Immutable
open System.Linq
open System

[<DiagnosticAnalyzer(Microsoft.CodeAnalysis.LanguageNames.CSharp)>]
type public MyFirstFSAnalyzer() = 
    inherit DiagnosticAnalyzer()
    let descriptor = DiagnosticDescriptor("FSharpIsLowerCase", 
                            "Types cannot contain lowercase letters", 
                            "{0} contains lowercase letters" , 
                            "Naming", 
                            DiagnosticSeverity.Warning, 
                            true, 
                            "User declared types should not contain lowercase letters.", 
                            null)

    override x.SupportedDiagnostics with get() = ImmutableArray.Create(descriptor)

    override x.Initialize (context: AnalysisContext) =
        let isLower = System.Func<_,_>(fun l -> Char.IsLower(l))
        let analyze (ctx: SymbolAnalysisContext) = 
            match ctx.Symbol with
                | z when z.Name.ToCharArray().Any(isLower) -> 
                    let d = Diagnostic.Create(descriptor, z.Locations.First(), z.Name)
                    ctx.ReportDiagnostic(d)
                | _->()

        context.RegisterSymbolAction(analyze, SymbolKind.NamedType)

As you can see, creating an analyzer in F# is possible, and once you have the tooling set up, the development flow is not much different from that of a C# or VB analyzer. Overall, I think F#'s pattern matching provides interesting possibilities when creating analyzers for Roslyn.

Analyzing Problematic Lambda Expressions used in Event Handlers

I was recently reading an article by Bill Wagner. At the end of the article Bill covers a common mistake that can cause problems using inline lambda expressions when adding and removing event handlers. Looking at these examples, I thought this would be a perfect case for a code analyzer.

So, the problematic code we are trying to catch is:

source.ProgressChanged += (_, message) => Console.WriteLine(message);
source.ProgressChanged -= (_, message) => Console.WriteLine(message);
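Why is this problematic? The second line creates a brand-new delegate instance, so the -= removes nothing and the handler leaks. The fix the analyzer nudges you toward is to keep a reference to the delegate so += and -= see the same instance. A small self-contained sketch (the Source class here is invented to stand in for the event source in Bill's example):

```csharp
using System;

class Source
{
    public event EventHandler<string> ProgressChanged;
    public void Report(string message) => ProgressChanged?.Invoke(this, message);
}

public static class HandlerDemo
{
    public static int Run()
    {
        var source = new Source();
        int calls = 0;

        // Storing the lambda means += and -= refer to the same delegate.
        EventHandler<string> handler = (sender, message) => calls++;
        source.ProgressChanged += handler;
        source.Report("first");            // handled: calls becomes 1
        source.ProgressChanged -= handler; // actually unsubscribes
        source.Report("second");           // not handled
        return calls;
    }
}

class Program
{
    static void Main() => Console.WriteLine(HandlerDemo.Run()); // prints 1
}
```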

To start, we are going to register an action for all AddAssignmentExpression and SubtractAssignmentExpression nodes.

context.RegisterSyntaxNodeAction(AnalyzeSyntax, SyntaxKind.AddAssignmentExpression, SyntaxKind.SubtractAssignmentExpression);

In our method, we are going to grab the node and determine the type so we can use it in our message later.

var assignmentNode = context.Node as AssignmentExpressionSyntax;
string assignmentType = "";

if (assignmentNode.IsKind(SyntaxKind.AddAssignmentExpression))
    assignmentType = "+=";
else if (assignmentNode.IsKind(SyntaxKind.SubtractAssignmentExpression))
    assignmentType = "-=";
else
    return;

Next, we will look at the right hand side of the operation and check if it is a lambda expression. If not, we don't need to analyze any further.

if (!assignmentNode.Right.IsKind(SyntaxKind.ParenthesizedLambdaExpression))
    return;

Finally, if we are at this point, we are either in a += or -= operation with the right hand side being a lambda expression, so we can now raise the diagnostic:

 context.ReportDiagnostic(Diagnostic.Create(Rule, assignmentNode.GetLocation(), assignmentType));

That's all there is to it. The full analyzer code is:

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;
using System.Collections.Immutable;

namespace EventAnalyzer
{
    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public class EventAnalyzerAnalyzer : DiagnosticAnalyzer
    {
        public const string DiagnosticId = "InlineDelegateEventAnalyzer";

        // You can change these strings in the Resources.resx file. If you do not want your analyzer to be localize-able, you can use regular strings for Title and MessageFormat.
        private static readonly LocalizableString Title = "Don't use += and -= with inline lambda expressions for events";
        private static readonly LocalizableString MessageFormat = "Don't use {0} with inline lambda expressions for events"; 
        private static readonly LocalizableString Description = "Don't use += and -= with inline lambda expressions for events.  If you add an event with a += and an inline lambda expression there is no way to properly remove the handler.  Using -= with an inline lambda expression will not remove an event handler.";
        private const string Category = "Usage";

        private static DiagnosticDescriptor Rule = new DiagnosticDescriptor(DiagnosticId, Title, MessageFormat, Category, DiagnosticSeverity.Warning, isEnabledByDefault: true, description: Description);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics { get { return ImmutableArray.Create(Rule); } }

        public override void Initialize(AnalysisContext context)
        {
            context.RegisterSyntaxNodeAction(AnalyzeSyntax, SyntaxKind.AddAssignmentExpression, SyntaxKind.SubtractAssignmentExpression);
        }

        private static void AnalyzeSyntax(SyntaxNodeAnalysisContext context)
        {
            var assignmentNode = context.Node as AssignmentExpressionSyntax;
            string assignmentType = "";

            if (assignmentNode.IsKind(SyntaxKind.AddAssignmentExpression))
                assignmentType = "+=";
            else if (assignmentNode.IsKind(SyntaxKind.SubtractAssignmentExpression))
                assignmentType = "-=";
            else
                return;

            if (!assignmentNode.Right.IsKind(SyntaxKind.ParenthesizedLambdaExpression))
                return;

            context.ReportDiagnostic(Diagnostic.Create(Rule, assignmentNode.GetLocation(), assignmentType));
        }
    }
}

Thanks to Bill for providing the inspiration for this analyzer and to Dan Smith for pointing me to the article. As you can see, it is very easy to create analyzers for specific situations when you know what to look for.

Analyzing Dictionaries with String Keys that are Created Using ToDictionary

In the past we have covered working with types in your analyzer. We even built an analyzer that looked at generic types to handle dictionaries with string keys, ensuring they specify a string comparer. Since we already have that built, let's modify the existing analyzer to handle the ToDictionary call on an IEnumerable.

Let's start with the existing code for the analyzer:

public override void Initialize(AnalysisContext context)
{
    context.RegisterCompilationStartAction(compilationContext =>
    {
        var dictionaryTokenType = compilationContext.Compilation.GetTypeByMetadataName("System.Collections.Generic.Dictionary`2");
        var equalityComparerInterfaceType = compilationContext.Compilation.GetTypeByMetadataName("System.Collections.Generic.IEqualityComparer`1");

        if (dictionaryTokenType != null)
        {
            compilationContext.RegisterSyntaxNodeAction(symbolContext =>
            {
                var creationNode = (ObjectCreationExpressionSyntax)symbolContext.Node;
                var variableTypeInfo = symbolContext.SemanticModel.GetTypeInfo(symbolContext.Node).ConvertedType as INamedTypeSymbol;

                if (variableTypeInfo == null)
                    return;

                if (!variableTypeInfo.OriginalDefinition.Equals(dictionaryTokenType))
                    return;

                // We only care about dictionaries who use a string as the key
                if (variableTypeInfo.TypeArguments.First().SpecialType != SpecialType.System_String)
                    return;

                var arguments = creationNode.ArgumentList?.Arguments;

                if (arguments == null || arguments.Value.Count == 0)
                {
                    symbolContext.ReportDiagnostic(Diagnostic.Create(Rule, symbolContext.Node.GetLocation()));
                    return;
                }

                bool hasEqualityComparer = false;
                foreach (var argument in arguments)
                {
                    var argumentType = symbolContext.SemanticModel.GetTypeInfo(argument.Expression);

                    if (argumentType.ConvertedType == null)
                        return;

                    if (argumentType.ConvertedType.OriginalDefinition.Equals(equalityComparerInterfaceType))
                    {
                        hasEqualityComparer = true;
                        break;
                    }
                }

                if (!hasEqualityComparer)
                {
                    symbolContext.ReportDiagnostic(Diagnostic.Create(Rule, symbolContext.Node.GetLocation()));
                }
            }, SyntaxKind.ObjectCreationExpression);
        }
    });
}

In order to modify this code, we need to do a few things. First, we need to change from using ObjectCreationExpression to InvocationExpression. Simply modify the syntax kind on the RegisterSyntaxNodeAction and update the cast to use an InvocationExpressionSyntax (note: I also changed the variable name for clarity):

compilationContext.RegisterSyntaxNodeAction(symbolContext =>
{
    var invocationNode = (InvocationExpressionSyntax)symbolContext.Node;
    /* rest of the analyzer */
}, SyntaxKind.InvocationExpression);

With that done, you can now run the code and it will catch all ToDictionary calls that return a Dictionary<string, X>. Unfortunately, it will also catch any other method calls that happen to return a dictionary with a string key. To fix this, we need to check the method invocation to make sure it is a ToDictionary call. To do this, we can access the expression from the invocationNode and check the identifier text to see if it is ToDictionary:

var expression = invocationNode.Expression as MemberAccessExpressionSyntax;
var name = expression?.Name.Identifier.Text;

if (name == null || !name.Equals("ToDictionary"))
{
    return;
}

That's it. The full code for the analyzer is:

public override void Initialize(AnalysisContext context)
{
    context.RegisterCompilationStartAction(compilationContext =>
    {
        var dictionaryTokenType = compilationContext.Compilation.GetTypeByMetadataName("System.Collections.Generic.Dictionary`2");
        var equalityComparerInterfaceType = compilationContext.Compilation.GetTypeByMetadataName("System.Collections.Generic.IEqualityComparer`1");

        if (dictionaryTokenType != null)
        {
            compilationContext.RegisterSyntaxNodeAction(symbolContext =>
            {
                var invocationNode = (InvocationExpressionSyntax)symbolContext.Node;
                var variableTypeInfo = symbolContext.SemanticModel.GetTypeInfo(symbolContext.Node).ConvertedType as INamedTypeSymbol;

                if (variableTypeInfo == null)
                    return;

                if (!variableTypeInfo.OriginalDefinition.Equals(dictionaryTokenType))
                    return;

                // We only care about dictionaries who use a string as the key
                if (variableTypeInfo.TypeArguments.First().SpecialType != SpecialType.System_String)
                    return;

                var expression = invocationNode.Expression as MemberAccessExpressionSyntax;
                var name = expression?.Name.Identifier.Text;

                if (name == null || !name.Equals("ToDictionary"))
                {
                    return;
                }

                var arguments = invocationNode.ArgumentList?.Arguments;

                if (arguments == null || arguments.Value.Count == 0)
                {
                    symbolContext.ReportDiagnostic(Diagnostic.Create(Rule, symbolContext.Node.GetLocation()));
                    return;
                }

                bool hasEqualityComparer = false;
                foreach (var argument in arguments)
                {
                    var argumentType = symbolContext.SemanticModel.GetTypeInfo(argument.Expression);

                    if (argumentType.ConvertedType == null)
                        return;

                    if (argumentType.ConvertedType.OriginalDefinition.Equals(equalityComparerInterfaceType))
                    {
                        hasEqualityComparer = true;
                        break;
                    }
                }

                if (!hasEqualityComparer)
                {
                    symbolContext.ReportDiagnostic(Diagnostic.Create(Rule, symbolContext.Node.GetLocation()));
                }
            }, SyntaxKind.InvocationExpression);
        }
    });
}

That is all you need to update to get a functioning analyzer that handles ToDictionary calls on an IEnumerable and ensures that your dictionaries explicitly define their string comparers. Obviously, this analyzer isn't perfect, as it technically catches any method called ToDictionary, regardless of whether it is the IEnumerable ToDictionary or some other method with the same name. I'll leave that fix as an exercise for the reader :).
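If you want to attempt that exercise, one possible approach (just a sketch, assuming the invocation resolves to an IMethodSymbol) is to ask the semantic model which method is actually being invoked and confirm that it lives on System.Linq.Enumerable:

```csharp
// Sketch: resolve the invoked symbol and bail out unless it is the
// LINQ ToDictionary extension method. Assumes this runs inside the
// same RegisterSyntaxNodeAction callback shown above.
var methodSymbol = symbolContext.SemanticModel
    .GetSymbolInfo(invocationNode).Symbol as IMethodSymbol;

if (methodSymbol == null)
    return;

// Extension methods report Enumerable as their containing type.
if (methodSymbol.ContainingType?.ToDisplayString() != "System.Linq.Enumerable")
    return;
```

This filters out user-defined ToDictionary methods while keeping the LINQ one, at the cost of an extra symbol lookup per invocation.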

Creating Your First Code Refactoring in Visual Studio 2015

One of the great new extensibility features of Visual Studio 2015 is the ability to create refactorings. This allows you to put an action in the Quick Actions menu that offers the user a quick change to the code they are working on. First, you need to have the Visual Studio Extensibility Templates installed in your instance of Visual Studio. You can install them from the Tools->Extensions and Updates menu inside Visual Studio.

Once you have the templates installed, you can create a new refactoring as a new project. Select the Extensibility tab from the new project dialog and create a new Code Refactoring (VSIX) project.

NewCodeRefactoring

This project template generates a refactoring that gives you the option to reverse the name of any type declaration. Just run the project, select any type declaration, hit your Quick Actions key (Ctrl+. by default), and you will see the Reverse type name action available.

ReverseTypeName

Now that we see that it works, let's look more into how it works. Every code refactoring overrides the ComputeRefactoringsAsync method. Inside this method, you compute the nodes in the syntax tree on which you want to provide refactoring options. Looking at the refactoring that reverses type names, it starts by finding the node from the SyntaxTree.

var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken).ConfigureAwait(false);

// Find the node at the selection.
var node = root.FindNode(context.Span);

Next, it checks the type of the node to see if it is a TypeDeclarationSyntax. If not, it will simply abort the refactoring registration.

// Only offer a refactoring if the selected node is a type declaration node.
var typeDecl = node as TypeDeclarationSyntax;
if (typeDecl == null)
{
    return;
}

Finally, it registers a refactoring using the RegisterRefactoring method on the CodeRefactoringContext. It registers a delegate that returns a Task<Solution>, which updates the solution with the refactored code.

// For any type declaration node, create a code action to reverse the identifier text.
var action = CodeAction.Create("Reverse type name", c => ReverseTypeNameAsync(context.Document, typeDecl, c));

// Register this code action.
context.RegisterRefactoring(action);

Doing this is enough to get the refactoring to appear in the quick actions dialog, but to actually get it to do something, we need to dig into the ReverseTypeNameAsync method.

This method starts by computing the newly refactored name for the node:

// Produce a reversed version of the type declaration's identifier token.
var identifierToken = typeDecl.Identifier;
var newName = new string(identifierToken.Text.ToCharArray().Reverse().ToArray());

Next it finds the symbol that represents the node in the semantic model, so it can properly rename the symbol across the solution.

// Get the symbol representing the type to be renamed.
var semanticModel = await document.GetSemanticModelAsync(cancellationToken);
var typeSymbol = semanticModel.GetDeclaredSymbol(typeDecl, cancellationToken);

Finally, it creates a new solution (remember, these objects are immutable) with the renamed symbol. This is done using the Renamer class from the Microsoft.CodeAnalysis.Rename namespace, which takes care of all of the heavy lifting of renaming a symbol and ensuring that all references in the solution are properly updated.

// Produce a new solution that has all references to that type renamed, including the declaration.
var originalSolution = document.Project.Solution;
var optionSet = originalSolution.Workspace.Options;
var newSolution = await Renamer.RenameSymbolAsync(document.Project.Solution, typeSymbol, newName, optionSet, cancellationToken).ConfigureAwait(false);

// Return the new solution with the now-reversed type name.
return newSolution;

That's all there is to it. As you can see, creating a refactoring in Visual Studio 2015 is a very easy task and gives you a lot of control in providing quick fixes to code. You can use this in a myriad of ways to make you and your team more productive and more consistent when using Visual Studio 2015.

Using the Diagnostics Tools in Visual Studio 2015 (Video)

Here is a video I created about using the new Diagnostics Tools window that is available as part of Visual Studio 2015. It covers the basics of IntelliTrace Events, Historical Debugging, Memory Usage, and Snapshot Comparison.

Send a Smile to the Visual Studio Team

I have been using Visual Studio 2015 since the early betas and one feature that often gets overlooked is the Send a Smile & Send a Frown buttons. These features allow you to quickly send feedback to Microsoft about your experiences in Visual Studio. When you click the Send a Smile/Frown button, you are presented with a dialog that allows you to send a screenshot and comments to the team to help them resolve the issue.

SendAFrown

Once you fill out this form, you will be presented with a second form that gives you the ability to tag the issue with additional information, as well as categorize it as a crash, regression, slow performance issue, or other. Depending on which of these options you choose, you will be given the option to record steps to reproduce the issue or attach a file that makes the reproduction of the issue easier for the team.

AdditionlFeedback

The reason I like this feature is that you often get feedback directly from the Visual Studio team. I have probably sent 10 or more frowns and have received a response on at least half of them.

SendASmileInbox

If you see something great in Visual Studio, send a smile. If you run into an issue, send a frown. If you are cordial and truly want feedback on your issue, I think you will be pleasantly surprised at how responsive the entire Visual Studio team is. Kudos to the entire Visual Studio team for rallying behind this feature and helping customers get past issues that arise out in the field.

Analyzing Indexers using Roslyn

I've occasionally come across code that just really bothers me. One such case that I have seen is the abuse of indexers. I have seen some programmers use this as a "free" method call that doesn't require a name. They load it up with parameters and it just doesn't feel right. For example, some code that looks like this:

public int this[string s, int i, long l]
{
    get
    {
        // Do some weird stuff here
        return s.GetHashCode();
    }
}

This code is perfectly legal C#, but it can be very confusing for someone coming into the code for the first time. Let's create an analyzer that raises a diagnostic any time there is more than one parameter to the indexer, to discourage the creation of this monstrosity. To start, we'll register a SyntaxNodeAction for SyntaxKind.IndexerDeclaration nodes.

public override void Initialize(AnalysisContext context)
{
    context.RegisterSyntaxNodeAction((syntaxNodeContext) =>
    {
    }, SyntaxKind.IndexerDeclaration);
}

Next, we can get the node and check the Count of its ParameterList.Parameters:

var node = syntaxNodeContext.Node as IndexerDeclarationSyntax;

if (node == null)
    return;

var paramCount = node.ParameterList.Parameters.Count;

Finally, we can raise a diagnostic if the parameter count is greater than 1. We'll also extract the class name from the indexer's parent, to provide a more meaningful diagnostic message.

if (paramCount > 1)
{
    var className = (node.Parent as ClassDeclarationSyntax)?.Identifier.ToFullString().Trim();
    syntaxNodeContext.ReportDiagnostic(Diagnostic.Create(Rule, node.ThisKeyword.GetLocation(), className));
}

So, the full diagnostic looks like this:

public override void Initialize(AnalysisContext context)
{
    context.RegisterSyntaxNodeAction((syntaxNodeContext) =>
    {
        var node = syntaxNodeContext.Node as IndexerDeclarationSyntax;

        if (node == null)
            return;

        var paramCount = node.ParameterList.Parameters.Count;

        if (paramCount > 1)
        {
            var className = (node.Parent as ClassDeclarationSyntax)?.Identifier.ToFullString().Trim();
            syntaxNodeContext.ReportDiagnostic(Diagnostic.Create(Rule, node.ThisKeyword.GetLocation(), className));
        }
    }, SyntaxKind.IndexerDeclaration);
}

As you can see, once you know how to create an analyzer, you can easily create them to ensure the code you and your team are writing follow certain standards. You can also start creating a suite of standards that you like to follow to ensure you don't fall into any bad habits.

Analyzing Memory Usage in Visual Studio 2015

The new diagnostic tools in Visual Studio 2015 provide a great set of troubleshooting features to help debug various problems in your application. One piece that I think is really nice is the memory usage information and the ability to snapshot memory and compare differences. If you are trying to locate a memory leak in a certain area of code, you can take a snapshot before and after running the code and see the delta of objects and their sizes.

Let's create a simple example that never frees up memory to show how we can use these tools. We'll continually create strings in a loop and add them to a HashSet, never freeing them. The string in y will continue to grow with each iteration of the loop.

static void Main(string[] args)
{
    var strings = new HashSet<string>();
    string y = "";
    for (int i=0; i<50000; i++)
    {
        y = y + i.ToString();
        strings.Add(y);

        Console.WriteLine(i);
        System.Threading.Thread.Sleep(1);
    }
} 

Now, if I start the application, the Diagnostics Tools window appears and within 10 seconds, my application is already consuming 150MB of RAM and growing quickly.

DiagnosticTools

If you open the Memory Usage tab, you'll see a button that allows you to take snapshots. If you click it a few times, you'll get a few memory snapshots. These are indicated by blue triangles in the memory graph, and you'll also see them listed below the Take Snapshot button.

DiagnosticToolsWithSnapshots

Inside the list of memory snapshots, you'll notice that the Objects and Heap Size columns have blue, clickable text. Clicking it takes you into a detail view for the snapshot. In the Objects view, you will see all of the object types that are in memory, along with metrics on the number and size of the objects. You can sort by the column headers to find out whether you have a lot of small objects that add up to a lot of memory, or one large object eating it all up. In our example, if I sort by size, the culprit is very obvious.

HeapObjects

Inside this view, there is also the Compare To drop down, which allows you to compare the current snapshot to another snapshot to see the deltas between the two snapshots. So, if I select a previous snapshot, I get some additional columns that provide the differences from the baseline snapshot to the currently selected snapshot. Again, in my example it is very obvious which object is growing, but these deltas may be very useful when the leak is not so obvious.

HeapObjectsWithBaseline

You can double click on any object in the snapshot and drill into it further. You'll see that there are Paths to Root and Referenced Types views available. The Paths To Root view gives you the ability to figure out what is holding on to your object and the reference counts. This view lets you walk up the entire parent tree of objects that have references. This may help you figure out the root cause of what object isn't being released that needs to be.

The Referenced Types view is a downward view of all of the objects that this object is holding on to. Since our example involves a HashSet, we can see all of the instances of the strings that it is holding on to. If you pause the program and use a snapshot of the current state of the program, you can even see the values inside the objects inside the HashSet.

ReferencedObjects

As you can see, inside that little diagnostic tools dialog, there are a lot of really nice features that can make tracking down memory problems just a little bit easier. Do you have a tip on using the diagnostic tools? If so, leave a comment below.

Working with Namespace Names in a Code Analyzer

Let's imagine we wanted to create an analyzer that compares the name of a namespace to the path of the folder in which the file resides. If the namespace doesn't match the folder structure, we should raise a diagnostic. As we have seen in a previous post, you can get at the file name for the code you are analyzing from a SyntaxTreeAction. To start, we'll register a new SyntaxTreeAction and get the file path. We can also take that path and turn it into something that resembles a namespace declaration:

compilationSyntax.RegisterSyntaxTreeAction((syntaxTreeContext) =>
{
    var filePath = syntaxTreeContext.Tree.FilePath;

    if (filePath == null)
        return;

    var parentDirectory = System.IO.Path.GetDirectoryName(filePath);

    // This will only work on windows and is not very robust.
    var parentDirectoryWithDots = parentDirectory.Replace("\\", ".");
});
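As a slightly more portable alternative (still just a sketch; drive letters, relative paths, and casing are ignored), you could normalize both separator styles using the Path separator constants:

```csharp
// Sketch: replace both directory separator styles so the
// comparison also behaves on non-Windows file systems.
var parentDirectoryWithDots = parentDirectory
    .Replace(System.IO.Path.DirectorySeparatorChar, '.')
    .Replace(System.IO.Path.AltDirectorySeparatorChar, '.');
```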

Now that we have the path to use for comparison, we can get the namespaces in the file and then perform the comparison. We can get all of the namespaces in a SyntaxTree by using the OfType extension method and find all NamespaceDeclarationSyntax node types. Note: Make sure you use DescendantNodes and not ChildNodes to ensure you get all nested namespace declarations.

var namespaceNodes = syntaxTreeContext.Tree.GetRoot().DescendantNodes().OfType<NamespaceDeclarationSyntax>();

We can loop over those nodes and compare the name to the folder path:

foreach (var ns in namespaceNodes)
{
    var name = ????

    if (!parentDirectoryWithDots.EndsWith(name, StringComparison.OrdinalIgnoreCase))
    {
        syntaxTreeContext.ReportDiagnostic(Diagnostic.Create(
           Rule, ns.Name.GetLocation(), parentDirectoryWithDots));
    }
}

You might be tempted to just use the ns.ToFullString() method to get the name of the namespace; however, this does not work for nested namespaces. For example, given the following code:

namespace Foo
{
    namespace Bar
    {
    }
}

The full namespace for Bar is Foo.Bar, but the ns.ToFullString() method will only return Bar. To properly get the full name, we need to ask our friend the SemanticModel for the Symbol for this SyntaxNode; then we can get the display string from there. Getting the semantic model requires a little massaging of our registration code.

The SemanticModel is available off of the Compilation object. As we discussed in the "Working with Types in Your Analyzer" post, you can access the Compilation object by registering for a CompilationStartAction and then registering your other actions from that context.

So our registration code can be modified to:

context.RegisterCompilationStartAction((compilationContext) =>
{
    compilationContext.RegisterSyntaxTreeAction((syntaxTreeContext) =>
    {
        var semModel = compilationContext.Compilation.GetSemanticModel(syntaxTreeContext.Tree);
        //...
    });
});

Then the code to check the namespace name can use the semantic model to get the INamespaceSymbol and get the display string from that.

foreach (var ns in namespaceNodes)
{
    var symbolInfo = semModel.GetDeclaredSymbol(ns) as INamespaceSymbol;
    var name = symbolInfo.ToDisplayString();

    if (!parentDirectoryWithDots.EndsWith(name, StringComparison.OrdinalIgnoreCase))
    {
        syntaxTreeContext.ReportDiagnostic(Diagnostic.Create(
           Rule, ns.Name.GetLocation(), parentDirectoryWithDots));
    }
}

So, in the end, the full analyzer method looks like this:

public override void Initialize(AnalysisContext context)
{
    context.RegisterCompilationStartAction((compilationContext) =>
    {
        compilationContext.RegisterSyntaxTreeAction((syntaxTreeContext) =>
        {
            var semModel = compilationContext.Compilation.GetSemanticModel(syntaxTreeContext.Tree);
            var filePath = syntaxTreeContext.Tree.FilePath;

            if (filePath == null)
                return;

            var parentDirectory = System.IO.Path.GetDirectoryName(filePath);

            // This will only work on windows and is not very robust.
            var parentDirectoryWithDots = parentDirectory.Replace("\\", ".");

            var namespaceNodes = syntaxTreeContext.Tree.GetRoot().DescendantNodes().OfType<NamespaceDeclarationSyntax>();

            foreach (var ns in namespaceNodes)
            {

                var symbolInfo = semModel.GetDeclaredSymbol(ns) as INamespaceSymbol;
                var name = symbolInfo.ToDisplayString();

                if (!parentDirectoryWithDots.EndsWith(name, StringComparison.OrdinalIgnoreCase))
                {
                    syntaxTreeContext.ReportDiagnostic(Diagnostic.Create(
                       Rule, ns.Name.GetLocation(), parentDirectoryWithDots));
                }
            }
        });
    });
}

And that is everything. I think this post really shows some of the power of using the SyntaxTree and the SemanticModel together to build an analyzer that can process real world code. This post was inspired by a question I answered on StackOverflow and took a few iterations to get right. I thought I would share the process of how I got there so others can learn from it. If you have written a similar analyzer or have great examples of using the SemanticModel and SyntaxTree together, let's talk about it in the comments below.

Creating Code Using the Syntax Factory

There are times when it is necessary to generate code and in Visual Studio 2015, the Roslyn compiler gives you a lot of power to do this.

Let's say we want to output Console.WriteLine("A");. The easiest way to start on this is to open the Syntax Visualizer and see what types of nodes make up the expression that represents our console write line invocation.

ConsoleWritelineExpression

As you can see, it is an ExpressionStatement that contains an InvocationExpression, which in turn contains a SimpleMemberAccessExpression and an ArgumentList.

Now if you look at the methods available on Microsoft.CodeAnalysis.CSharp.SyntaxFactory, you will see methods that can create the nodes we see in the syntax graph. For example, to create the invocation expression, you would use the SyntaxFactory.InvocationExpression method. Very often when people build these trees, you will see a lot of method calls chained on a single line of code. For example, the Roslyn Quoter project on GitHub generates the following code to produce our Console.WriteLine("A"); statement:

return SyntaxFactory.ExpressionStatement(
           SyntaxFactory.InvocationExpression(
               SyntaxFactory.MemberAccessExpression(
                   SyntaxKind.SimpleMemberAccessExpression,
                   SyntaxFactory.IdentifierName(
                       @"Console"),
                   SyntaxFactory.IdentifierName(
                       @"WriteLine"))
               .WithOperatorToken(
                   SyntaxFactory.Token(
                       SyntaxKind.DotToken)))
           .WithArgumentList(
               SyntaxFactory.ArgumentList(
                   SyntaxFactory.SingletonSeparatedList<ArgumentSyntax>(
                       SyntaxFactory.Argument(
                           SyntaxFactory.LiteralExpression(
                               SyntaxKind.StringLiteralExpression,
                               SyntaxFactory.Literal(
                                   SyntaxFactory.TriviaList(),
                                   @"""A""",
                                   @"""A""",
                                   SyntaxFactory.TriviaList())))))
               .WithOpenParenToken(
                   SyntaxFactory.Token(
                       SyntaxKind.OpenParenToken))
               .WithCloseParenToken(
                   SyntaxFactory.Token(
                       SyntaxKind.CloseParenToken))));

The code is generated, so it is a little verbose, but it provides us a great starting point to know what work needs to be done. We can refactor that a bit to get something that is more consumable and debuggable:

static void Main(string[] args)
{
    var console = SyntaxFactory.IdentifierName("Console");
    var writeline = SyntaxFactory.IdentifierName("WriteLine");
    var memberaccess = SyntaxFactory.MemberAccessExpression(SyntaxKind.SimpleMemberAccessExpression, console, writeline);

    var argument = SyntaxFactory.Argument(SyntaxFactory.LiteralExpression(SyntaxKind.StringLiteralExpression, SyntaxFactory.Literal("A")));
    var argumentList = SyntaxFactory.SeparatedList(new[] { argument });

    var writeLineCall =
        SyntaxFactory.ExpressionStatement(
        SyntaxFactory.InvocationExpression(memberaccess,
        SyntaxFactory.ArgumentList(argumentList)));

    var text = writeLineCall.ToFullString();

    Console.WriteLine(text);
    Console.ReadKey();
}

If you look at the syntax tree image, you'll see that we are just building up the syntax nodes (in blue in the tree) from the bottom up. The first two lines simply create variables to hold the IdentifierName syntax nodes in the tree. Next, we create the SimpleMemberAccessExpression and pass it the two identifiers. Then we create an Argument with a Literal of "A" and wrap it in a list that we can later pass to SyntaxFactory.ArgumentList. Finally, we use all of the nodes we created previously to build up the complete syntax tree by calling the appropriate methods on the SyntaxFactory.

You are now armed with the power of creating code using Roslyn's SyntaxFactory. Use it wisely (and sparingly). Special thanks to Kirill Osenkov for creating the Roslyn Quoter project and making it available. That tool is invaluable when getting started with the SyntaxFactory.

Checking the Name of the File You are Analyzing

There are times when writing an analyzer that you may also want to get information about the file that contains some code you are analyzing. This information is available from Roslyn in the SyntaxTree and provides access to things like the file encoding and the file name.

Let's say we wanted to create an analyzer that ensures all of the compilable files in our solution have file names that start with an uppercase letter. To do this, we would first register a SyntaxTreeAction:

context.RegisterSyntaxTreeAction((syntaxTreeContext) =>
{
});

Inside that action, we will get the file path and ensure it is not null. It could be null for a few reasons, mainly when dynamically compiling snippets of code, so we will just guard against that:

var filePath = syntaxTreeContext.Tree.FilePath;

if (filePath == null)
    return;

Next, we can use the System.IO.Path class to get just the file name and check whether the first character is a lowercase letter. If it is, we will raise a diagnostic:

var fileName = System.IO.Path.GetFileNameWithoutExtension(filePath);

if (Char.IsLower(fileName[0]))
    syntaxTreeContext.ReportDiagnostic(Diagnostic.Create(Rule,
           Location.Create(syntaxTreeContext.Tree, TextSpan.FromBounds(0, 0)), fileName));

So, at the end, our diagnostic looks like this:

public override void Initialize(AnalysisContext context)
{
    context.RegisterSyntaxTreeAction((syntaxTreeContext) =>
    {
        var filePath = syntaxTreeContext.Tree.FilePath;

        if (filePath == null)
            return;

        var fileName = System.IO.Path.GetFileNameWithoutExtension(filePath);

        if (Char.IsLower(fileName[0]))
            syntaxTreeContext.ReportDiagnostic(Diagnostic.Create(Rule, 
                Location.Create(syntaxTreeContext.Tree, TextSpan.FromBounds(0, 0)), fileName));
    });
}

Those are the basics of accessing file information from an analyzer. If you want to see an example of an analyzer that looks at the encoding of the file, you can look at SA1412StoreFilesAsUtf8 in the StyleCopAnalyzers GitHub project.
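As a rough sketch of what such an encoding check might look like (this is an assumption-laden simplification, not how SA1412 actually implements it), you can read the Tree.Encoding property from the same SyntaxTreeAction:

```csharp
// Sketch: report a diagnostic when a file is not stored as UTF-8.
// Tree.Encoding can be null for generated or in-memory trees, so
// guard against that before comparing.
context.RegisterSyntaxTreeAction((syntaxTreeContext) =>
{
    var encoding = syntaxTreeContext.Tree.Encoding;

    if (encoding != null && !(encoding is System.Text.UTF8Encoding))
        syntaxTreeContext.ReportDiagnostic(Diagnostic.Create(Rule,
            Location.Create(syntaxTreeContext.Tree, TextSpan.FromBounds(0, 0))));
});
```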

Creating a Nuget Package For Your Analyzer

Once you have developed a nice set of analyzers, you may want to share them with the world. NuGet is the most common way of sharing libraries these days, and in Visual Studio 2015 you can add analyzers via NuGet packages. If you started with the Visual Studio templates to create your analyzers and used the Analyzer with Code Fix (NuGet + VSIX) template, then your project is already generating the NuGet package for you. Just look in your output directory and you should see a {ProjectName}.nupkg file.

To see how this is implemented, we can first look at the post build step for the analyzer project. You will see that it calls out to nuget.exe to pack using a .nuspec file.

"$(SolutionDir)\packages\NuGet.CommandLine.2.8.2\tools\NuGet.exe" pack Diagnostic.nuspec -NoPackageAnalysis -OutputDirectory .

Looking into the Diagnostic.nuspec file in the project, we see the following.

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>AnalyzerSamples</id>
    <version>1.0.0.0</version>
    <title>AnalyzerSamples</title>
    <authors>AnalyzerSamples</authors>
    <owners>AnalyzerSamples</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>AnalyzerSamples</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright</copyright>
    <tags>AnalyzerSamples, analyzers</tags>
    <frameworkAssemblies>
      <frameworkAssembly assemblyName="System" targetFramework="" />
    </frameworkAssemblies>
  </metadata>
  <files>
    <file src="*.dll" target="tools\analyzers\" exclude="**\Microsoft.CodeAnalysis.*;**\System.Collections.Immutable.*;**\System.Reflection.Metadata.*" />
    <file src="tools\*.ps1" target="tools\" />
  </files>
</package>

The metadata section of the Diagnostic.nuspec is pretty standard. If you are unsure what values to provide, the Nuspec Reference page can provide guidance. The more interesting part of this .nuspec is the files section. The first file entry copies the output DLLs to the tools\analyzers output folder. This is a convention for analyzers, as outlined in the NuGet Analyzers Conventions documentation. If you need to target specific languages or framework versions with your analyzer, you can use a more complex folder structure as described in the conventions document. For consumers using NuGet 3 with a project.json, these conventions are all that are needed to install your analyzers. However, for NuGet 2 users with a packages.config file, you will also need the second file entry, described below.

The second file entry copies a few PowerShell scripts into your NuGet package. The project contains two scripts, install.ps1 and uninstall.ps1, which are run when your analyzer package is installed and uninstalled, respectively. For more information on PowerShell scripts in NuGet packages, see the Creating and Publishing a Package documentation.

Now we can look into the install and uninstall scripts to see what they do. The install.ps1 script starts with the following:

param($installPath, $toolsPath, $package, $project)

$analyzersPath = join-path $toolsPath "analyzers"

# Install the language agnostic analyzers.
foreach ($analyzerFilePath in Get-ChildItem $analyzersPath -Filter *.dll)
{
    if($project.Object.AnalyzerReferences)
    {
        $project.Object.AnalyzerReferences.Add($analyzerFilePath.FullName)
    }
}

This determines the analyzers path using the $toolsPath variable passed into the script, then loops over all of the DLLs in the $toolsPath\analyzers folder and calls the AnalyzerReferences.Add method to add each DLL as an analyzer reference. The next section of install.ps1 does the following:

# Install language specific analyzers.
# $project.Type gives the language name like (C# or VB.NET)
$languageAnalyzersPath = join-path $analyzersPath $project.Type

foreach ($analyzerFilePath in Get-ChildItem $languageAnalyzersPath -Filter *.dll)
{
    if($project.Object.AnalyzerReferences)
    {
        $project.Object.AnalyzerReferences.Add($analyzerFilePath.FullName)
    }
}

This section of the script does basically the same thing as the first, except that it looks in the $toolsPath\analyzers\{language} folder for DLLs and adds them as analyzers to the project.

The uninstall.ps1 just does the reverse of the install and calls the Remove method on the AnalyzerReferences object.

That's it. When you build, the NuGet package is generated for you. If you want to push it to the official NuGet servers, you can follow the Publishing in NuGet Gallery section of the Creating and Publishing a Package documentation. As you can see, creating a NuGet package for your analyzer is very easy, and if you are taking the time to create quality analyzers, you should put forth the extra effort to make them as easy to install and use as possible.

Creating an Analyzer For Sealing Classes

I was talking with a co-worker today when the topic of code analyzers came up. He stated that one analyzer he would really like to see is one that warns you if you are not sealing classes that do not explicitly define abstract or virtual methods. This is somewhat of a religious debate among developers. If you don't believe me, just read the comments on Eric Lippert's post on the subject. I am not going to debate the validity of the claims on either side, rather I am going to focus on how one would implement an analyzer to see if a class should be sealed if it does not explicitly define functionality to be extended.

To start, I will register an analyzer to look at all class declarations.

context.RegisterSyntaxNodeAction((syntaxNodeContext) =>
{
} , SyntaxKind.ClassDeclaration);

Next, I will check if the class is static using the Modifiers property. If it is, I won't bother analyzing it:

var node = syntaxNodeContext.Node as ClassDeclarationSyntax;

// We don't care about sealing static classes
if (node.Modifiers.Where(x => x.IsKind(SyntaxKind.StaticKeyword)).Any())
    return;

We can also rule out any classes that are already sealed by checking the Modifiers for the SealedKeyword:

// The class is already sealed, no reason to analyze it
if (node.Modifiers.Where(x => x.IsKind(SyntaxKind.SealedKeyword)).Any())
    return;

At this point, we have a non-static, non-sealed class, so we need to check the points that can be extended via inheritance. Based on the MSDN docs for abstract and virtual, we know that we need to check methods, properties, events, and indexers. We can get those from the Members of the class:

var methods = node.Members.Where(x => x.IsKind(SyntaxKind.MethodDeclaration));
var props = node.Members.Where(x => x.IsKind(SyntaxKind.PropertyDeclaration));
var events = node.Members.Where(x => x.IsKind(SyntaxKind.EventDeclaration));
var indexers = node.Members.Where(x => x.IsKind(SyntaxKind.IndexerDeclaration));

Next, we just have to loop over those declarations to determine if any of them are abstract or virtual. If any of them are, then we know we don't need to raise a diagnostic:

foreach (var m in methods)
{
    var modifiers = (m as MethodDeclarationSyntax)?.Modifiers.Where(x=>x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
    if (modifiers != null && modifiers.Any())
        return;
}

foreach (var p in props)
{
    var modifiers = (p as PropertyDeclarationSyntax)?.Modifiers.Where(x => x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
    if (modifiers != null && modifiers.Any())
        return;
}

foreach (var e in events)
{
    var modifiers = (e as EventDeclarationSyntax)?.Modifiers.Where(x => x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
    if (modifiers != null && modifiers.Any())
        return;
}

foreach (var i in indexers)
{
    var modifiers = (i as IndexerDeclarationSyntax)?.Modifiers.Where(x => x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
    if (modifiers != null && modifiers.Any())
        return;
}

Finally, if we have gone through all of the possible inheritance points and still have not hit an explicit declaration of intent for inheritance, we can raise our diagnostic:

// We got here, so there are no abstract or virtual methods/properties/events/indexers
syntaxNodeContext.ReportDiagnostic(Diagnostic.Create(Rule, node.GetLocation()));

So, our full diagnostic is:

public override void Initialize(AnalysisContext context)
{
    context.RegisterSyntaxNodeAction((syntaxNodeContext) =>
    {
        var node = syntaxNodeContext.Node as ClassDeclarationSyntax;

        // We don't care about sealing static classes
        if (node.Modifiers.Where(x => x.IsKind(SyntaxKind.StaticKeyword)).Any())
            return;

        // The class is already sealed, no reason to analyze it
        if (node.Modifiers.Where(x => x.IsKind(SyntaxKind.SealedKeyword)).Any())
            return;

        var methods = node.Members.Where(x => x.IsKind(SyntaxKind.MethodDeclaration));
        var props = node.Members.Where(x => x.IsKind(SyntaxKind.PropertyDeclaration));
        var events = node.Members.Where(x => x.IsKind(SyntaxKind.EventDeclaration));
        var indexers = node.Members.Where(x => x.IsKind(SyntaxKind.IndexerDeclaration));

        foreach (var m in methods)
        {
            var modifiers = (m as MethodDeclarationSyntax)?.Modifiers.Where(x=>x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
            if (modifiers != null && modifiers.Any())
                return;
        }

        foreach (var p in props)
        {
            var modifiers = (p as PropertyDeclarationSyntax)?.Modifiers.Where(x => x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
            if (modifiers != null && modifiers.Any())
                return;
        }

        foreach (var e in events)
        {
            var modifiers = (e as EventDeclarationSyntax)?.Modifiers.Where(x => x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
            if (modifiers != null && modifiers.Any())
                return;
        }

        foreach (var i in indexers)
        {
            var modifiers = (i as IndexerDeclarationSyntax)?.Modifiers.Where(x => x.IsKind(SyntaxKind.AbstractKeyword) || x.IsKind(SyntaxKind.VirtualKeyword));
            if (modifiers != null && modifiers.Any())
                return;
        }

        // We got here, so there are no abstract or virtual methods/properties/events/indexers
        syntaxNodeContext.ReportDiagnostic(Diagnostic.Create(Rule, node.GetLocation()));

    } , SyntaxKind.ClassDeclaration);
}

I have also posted the full diagnostic in my analyzer samples project on GitHub.

Whichever side of this debate you are on, you can see how to create an analyzer that catches classes that are not intended for inheritance and ensures they are not inheritable. It is up to you whether to include this analyzer in your list of enabled analyzers.

Creating a Stand-Alone Code Analyzer

Many of my past posts have covered the process of creating an analyzer that is part of a Visual Studio add-in or NuGet package. However, there are times when you might want to create a command line utility to do some analysis of a project. This type of project is very useful for prototyping ideas or writing a one-off utility to get an idea of the state of a solution. It can also be very useful for consultants who want to run a custom set of tools against customer solutions.

To create a stand-alone code analyzer, you can select the Stand-Alone Code Analysis Tool project template from the Extensibility section of the New Project window.


At this point, you just have a basic console application that references the code analysis DLLs. To start analyzing a project, you need to load it. To do this, we will create an MSBuildWorkspace and load the solution file:

var ws = Microsoft.CodeAnalysis.MSBuild.MSBuildWorkspace.Create();
var soln = ws.OpenSolutionAsync(@"Z:\Dev\Temp\SimpleWinformsTestApp\SimpleWinformsTestApp.sln").Result;

Next, we will get the single project from the solution and its Compilation object, so that we can access things like the SyntaxTrees.

var proj = soln.Projects.Single();
var compilation = proj.GetCompilationAsync().Result;

Now that we have the Compilation object, we have a myriad of options available to us and can start doing analysis. In this example, I want to find all classes that inherit from System.Windows.Forms.Form. So, as I described in the Working With Types in Your Analyzer post, we can get the type by its metadata name from the Compilation object:

string TEST_ATTRIBUTE_METADATA_NAME = "System.Windows.Forms.Form";
var testAttributeType = compilation.GetTypeByMetadataName(TEST_ATTRIBUTE_METADATA_NAME);

To get at the classes declared in the project, we need to loop over all of the SyntaxTrees in the Compilation and then find all of the ClassDeclarationSyntax nodes declared in those trees:

foreach (var tree in compilation.SyntaxTrees)
{
    var classes = tree.GetRoot().DescendantNodesAndSelf().Where(x => x.IsKind(SyntaxKind.ClassDeclaration));
    foreach (var c in classes)
    {
     // ...
    }
// ...
}

Once we have all of the classes, we need to determine whether each one inherits from the testAttributeType we resolved earlier. The ClassDeclarationSyntax has a BaseList property which lists the base types of the class. For each of these base types, we can get the type information from the SemanticModel and compare it to testAttributeType to see if the class is a Windows Form:

var classDec = (ClassDeclarationSyntax)c;
var bases = classDec.BaseList;

if (bases?.Types != null)
{
    foreach (var b in bases.Types)
    {
        var nodeType = compilation.GetSemanticModel(tree).GetTypeInfo(b.Type);
        // Is the node a System.Windows.Forms.Form?
        if (nodeType.Type.Equals(testAttributeType))
        {
            Console.WriteLine(classDec.Identifier.Text);
        }
    }
}

The full program is as follows:

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

namespace StandAloneCodeAnalysis
{
    class Program
    {
        static void Main(string[] args)
        {
            var ws = Microsoft.CodeAnalysis.MSBuild.MSBuildWorkspace.Create();
            var soln = ws.OpenSolutionAsync(@"Z:\Dev\Temp\SimpleWinformsTestApp\SimpleWinformsTestApp.sln").Result;
            var proj = soln.Projects.Single();
            var compilation = proj.GetCompilationAsync().Result;

            string TEST_ATTRIBUTE_METADATA_NAME = "System.Windows.Forms.Form";
            var testAttributeType = compilation.GetTypeByMetadataName(TEST_ATTRIBUTE_METADATA_NAME);

            foreach (var tree in compilation.SyntaxTrees)
            {
                var classes = tree.GetRoot().DescendantNodesAndSelf().Where(x => x.IsKind(SyntaxKind.ClassDeclaration));
                foreach (var c in classes)
                {
                    var classDec = (ClassDeclarationSyntax)c;
                    var bases = classDec.BaseList;

                    if (bases?.Types != null)
                    {
                        foreach (var b in bases.Types)
                        {
                            var nodeType = compilation.GetSemanticModel(tree).GetTypeInfo(b.Type);

                            // Is the node a System.Windows.Forms.Form?
                            if (nodeType.Type.Equals(testAttributeType))
                            {
                                Console.WriteLine(classDec.Identifier.Text);
                            }
                        }
                    }
                }
            }
            Console.ReadKey();
        }
    }
}

As you can see, creating an analyzer that works as a stand-alone project is not too different from creating an analyzer that runs inside of Visual Studio. There are some basic differences and you are required to set up a little more yourself, but the underlying logic does not change much.

Analyzing the Order of Method Calls

Recently there was a question on StackOverflow that piqued my interest. The question was basically asking how an analyzer could ensure that one method is called on an instance prior to calling another method on it.

An example of this would be if you wanted to ensure you checked the HasValue property on a System.Nullable before you accessed the Value property. I provided an answer to the question, but let's dig a little deeper here.

Since we want to look at System.Nullable, we'll first need to get that type to check it against variable types when doing our analysis. To do this, we will register an action to run on compilation start, as I demonstrated in the Working with Types in Your Analyzer post. We'll also register a syntax node action to analyze all MethodDeclaration nodes.

context.RegisterCompilationStartAction((compilationStartContext) =>
{
    var nullableType = compilationStartContext.Compilation.GetTypeByMetadataName("System.Nullable`1");
    compilationStartContext.RegisterSyntaxNodeAction((analysisContext) =>
    {
        // ...
    }, SyntaxKind.MethodDeclaration);
});

Next, we will capture all MemberAccessExpressionSyntax nodes from the method, which gives us every node where a member of an object is accessed (method calls, property accesses, field accesses, and so on).

var invocations =
    analysisContext.Node.DescendantNodes().OfType<MemberAccessExpressionSyntax>();

We'll also create a HashSet<string> to keep track of all the HasValue calls for a given variable.

var hasValueCalls = new HashSet<string>();

Now we can iterate over the invocations to keep track of the HasValue and Value calls. To start, we check if the Expression on the invocation is an IdentifierNameSyntax:

foreach (var invocation in invocations)
{
    var e = invocation.Expression as IdentifierNameSyntax;

    if (e == null)
        continue;

    //...
}

Next, we will use the semantic model to get the type information for the expression. To do this, we simply call the GetTypeInfo method on the SemanticModel:

var typeInfo = analysisContext.SemanticModel.GetTypeInfo(e).Type as INamedTypeSymbol;

We now have the type info, so we will check that it is a System.Nullable type. Since all nullable value types (e.g. int?, bool?) are constructed from the System.Nullable`1 type, we can use the nullableType we captured earlier to verify the type.

if (typeInfo?.ConstructedFrom == null)
    continue;

if (!typeInfo.ConstructedFrom.Equals(nullableType))
    continue;
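As a side note, the relationship the ConstructedFrom check relies on can be seen outside Roslyn with plain reflection; this sketch is only an illustration of why matching against System.Nullable`1 covers every T?:

```csharp
using System;

public static class NullableDemo
{
    public static void Main()
    {
        // int? and bool? are shorthand for Nullable<int> and Nullable<bool>.
        // Their shared generic definition is the open type Nullable<>, which
        // is what GetTypeByMetadataName("System.Nullable`1") resolves to.
        Console.WriteLine(typeof(int?).GetGenericTypeDefinition() == typeof(Nullable<>));   // True
        Console.WriteLine(typeof(bool?).GetGenericTypeDefinition() == typeof(Nullable<>));  // True
    }
}
```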

At this point, we know that the variable is a System.Nullable, so we can now check if the invocation is a HasValue or a Value call. If it is a HasValue call, we will add that variable to our HashSet, so we know that HasValue has been called on this variable.

string variableName = e.Identifier.Text;

if (invocation.Name.ToString() == "HasValue")
{
    hasValueCalls.Add(variableName);
}

Finally, we can check the Value calls and see if there was a previous HasValue call for the variable name. If not, we can raise a diagnostic.

if (invocation.Name.ToString() == "Value")
{
    if (!hasValueCalls.Contains(variableName))
    {
        analysisContext.ReportDiagnostic(Diagnostic.Create(Rule, e.GetLocation()));
    }
}

Now, if we try some code against this analyzer, the following will raise a diagnostic:

int? x = null;
var n = x.Value;  

While the following code will not raise a diagnostic:

int? x = null;
if (x.HasValue)
{
    var n = x.Value;
}

In the end, the full diagnostic is:

public override void Initialize(AnalysisContext context)
{
    context.RegisterCompilationStartAction((compilationStartContext) =>
    {
        var nullableType = compilationStartContext.Compilation.GetTypeByMetadataName("System.Nullable`1");
        compilationStartContext.RegisterSyntaxNodeAction((analysisContext) =>
        {
            var invocations =
                analysisContext.Node.DescendantNodes().OfType<MemberAccessExpressionSyntax>();
            var hasValueCalls = new HashSet<string>();
            foreach (var invocation in invocations)
            {
                var e = invocation.Expression as IdentifierNameSyntax;

                if (e == null)
                    continue;

                var typeInfo = analysisContext.SemanticModel.GetTypeInfo(e).Type as INamedTypeSymbol;

                if (typeInfo?.ConstructedFrom == null)
                    continue;

                if (!typeInfo.ConstructedFrom.Equals(nullableType))
                    continue;

                string variableName = e.Identifier.Text;

                if (invocation.Name.ToString() == "HasValue")
                {
                    hasValueCalls.Add(variableName);
                }

                if (invocation.Name.ToString() == "Value")
                {
                    if (!hasValueCalls.Contains(variableName))
                    {
                        analysisContext.ReportDiagnostic(Diagnostic.Create(Rule, e.GetLocation()));
                    }
                }
            }
        }, SyntaxKind.MethodDeclaration);
    });
}

Getting the order of method calls can be a bit tricky, but once you work out the logic and the syntax nodes that you need to process, the work to be done becomes clear. If you have a story of an analyzer you wrote that does something similar, share it in the comments below.

Working with If Blocks in Your Code Analyzers

Once you start writing your analyzers, you will eventually run into a scenario where you have to handle an if block. If blocks, like any other blocks of code, have a few nuances that you need to understand when working with them.

To start analyzing if blocks, you can register a SyntaxNodeAction for the SyntaxKind.IfStatement:

context.RegisterSyntaxNodeAction((syntaxNodeContext)=>
{
}, SyntaxKind.IfStatement); 

Before we start analyzing the IfStatementSyntax, we should first look at the components of the syntax. Take, for example, the following code:

if (x == null)
{
    // Do stuff
}
else
{
    // Do other stuff
}

Breaking down this statement, each part maps to a different property on the IfStatementSyntax, as described below.

if: represented by the IfKeyword property on the IfStatementSyntax
(x == null): represented by the Condition property on the IfStatementSyntax
{ // Do stuff }: represented by the Statement property on the IfStatementSyntax
else { // Do other stuff }: represented by the Else property on the IfStatementSyntax

Now that you have a basic understanding of the different elements in the IfStatementSyntax, you can take the node passed into the method via the syntaxNodeContext to access the if statement. If, for example, we wanted to raise a diagnostic any time someone checked != null, we could start by casting the condition to a BinaryExpressionSyntax:

var node = syntaxNodeContext.Node as IfStatementSyntax;
var binaryExpression = node.Condition as BinaryExpressionSyntax;

if (binaryExpression == null)
    return;

Next, the binaryExpression has a kind that we can check to make sure it is a NotEqualsExpression.

if (binaryExpression.IsKind(SyntaxKind.NotEqualsExpression))
{
}

The binary expression contains a left and a right portion. Since we are looking to see if one side of the expression is null, we can cast the left and right portions to a LiteralExpressionSyntax and check whether it is a NullLiteralExpression. If either side matches, we raise the diagnostic.

var left = binaryExpression.Left as LiteralExpressionSyntax;
var right = binaryExpression.Right as LiteralExpressionSyntax;

if ((left != null && left.IsKind(SyntaxKind.NullLiteralExpression))
    || (right != null && right.IsKind(SyntaxKind.NullLiteralExpression)))
{
    syntaxNodeContext.ReportDiagnostic(Diagnostic.Create(Rule, binaryExpression.GetLocation()));
}
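To illustrate what this catches, consider the following hypothetical sample (not from the post). Both comparisons below contain a null literal on one side of a != expression, so both would be flagged, since the analyzer checks each side of the binary expression:

```csharp
using System;

public static class NullCheckExamples
{
    public static int CountSet(object x, object y)
    {
        int n = 0;

        if (x != null)   // flagged: null literal on the right
            n++;

        if (null != y)   // flagged: null literal on the left
            n++;

        return n;
    }

    public static void Main() =>
        Console.WriteLine(CountSet(new object(), null)); // prints 1
}
```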

So in the end, our full diagnostic looks like this:

public override void Initialize(AnalysisContext context)
{
    context.RegisterSyntaxNodeAction((syntaxNodeContext)=>
    {
        var node = syntaxNodeContext.Node as IfStatementSyntax;
        var binaryExpression = node.Condition as BinaryExpressionSyntax;

        if (binaryExpression == null)
            return;

        if (binaryExpression.IsKind(SyntaxKind.NotEqualsExpression))
        {
            var left = binaryExpression.Left as LiteralExpressionSyntax;
            var right = binaryExpression.Right as LiteralExpressionSyntax;

            if ((left != null && left.IsKind(SyntaxKind.NullLiteralExpression))
                || (right != null && right.IsKind(SyntaxKind.NullLiteralExpression)))
            {
                syntaxNodeContext.ReportDiagnostic(Diagnostic.Create(Rule, binaryExpression.GetLocation()));
            }
        }
    }, SyntaxKind.IfStatement);
}

As I demonstrated, processing if statements in your analyzers is not difficult, and the if statement is a powerful construct to handle in an analyzer. There is obviously much more that can be done with if statements, and I would encourage you to take the time to learn more about them.

Analyzing XML Comments in your Roslyn Code Analyzer

In previous articles, we covered the basics of dealing with comments in code analyzers. You'll recall that comments and other white space are referred to as trivia by the Roslyn compiler. XML comments are also considered trivia; however, they are a special case called structured trivia.

If, for example, you wanted to check to see that all public methods have some sort of XML comment, you could simply write an analyzer like this:

public override void Initialize(AnalysisContext context)
{
    context.RegisterSyntaxNodeAction(CheckMethods, SyntaxKind.MethodDeclaration);
}

private void CheckMethods(SyntaxNodeAnalysisContext syntaxNodeAnalysisContext)
{
    var node = syntaxNodeAnalysisContext.Node as MethodDeclarationSyntax;
    if (node.HasStructuredTrivia)
        return;

    if (node.Modifiers.Any(SyntaxKind.PublicKeyword))
        syntaxNodeAnalysisContext.ReportDiagnostic(Diagnostic.Create(Rule,node.GetLocation()));
}

All we are checking in that code is that the method has no structured trivia and that it is a public method. At this point, the analyzer isn't doing anything particularly useful, other than ensuring that the method has a leading XML comment indicator (///). Next, we can modify it to verify that the comment actually has a summary node in the XML. To do this, we need to get the structured XML and then find the XmlElementSyntax nodes that have a name of summary.

private void CheckMethods(SyntaxNodeAnalysisContext syntaxNodeAnalysisContext)
{
    var node = syntaxNodeAnalysisContext.Node as MethodDeclarationSyntax;

    if (!node.Modifiers.Any(SyntaxKind.PublicKeyword))
        return;

    var xmlTrivia = node.GetLeadingTrivia()
        .Select(i => i.GetStructure())
        .OfType<DocumentationCommentTriviaSyntax>()
        .FirstOrDefault();

    // A public method with no XML documentation at all is missing its summary
    if (xmlTrivia == null)
    {
        syntaxNodeAnalysisContext.ReportDiagnostic(
            Diagnostic.Create(Rule, node.Identifier.GetLocation(), "Missing Summary"));
        return;
    }

    var hasSummary = xmlTrivia.ChildNodes()
        .OfType<XmlElementSyntax>()
        .Any(i => i.StartTag.Name.ToString().Equals("summary"));

    if (!hasSummary)
    {
        syntaxNodeAnalysisContext.ReportDiagnostic(
           Diagnostic.Create(Rule, node.Identifier.GetLocation(), "Missing Summary"));
    }
}

If we find the summary is missing, we raise a diagnostic at the Identifier location, such that only the method name gets underlined.

We could take this a bit further by checking the parameters. To start, we need to get the XmlElementSyntax nodes whose name is param. From those nodes, we will get all attributes of the type XmlNameAttributeSyntax.

var allParamNameAttributes = xmlTrivia.ChildNodes()
    .OfType<XmlElementSyntax>()
    .Where(i => i.StartTag.Name.ToString().Equals("param"))
    .SelectMany(i => i.StartTag.Attributes.OfType<XmlNameAttributeSyntax>());

Once this query is evaluated, allParamNameAttributes contains a list of all name attributes within the XML comments. Using these nodes, we can compare the identifiers to the names of the parameters in the method declaration and raise any diagnostics where a parameter is not documented.

foreach (var param in node.ParameterList.Parameters)
{
    var existsInXmlTrivia = allParamNameAttributes
                .Any(i => i.Identifier.ToString().Equals(param.Identifier.Text));

    if (!existsInXmlTrivia)
    {
        syntaxNodeAnalysisContext.ReportDiagnostic(
             Diagnostic.Create(Rule, param.GetLocation(), "Parameter Not Documented"));
    }
}
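As a concrete target for these checks, a public method that passes both the summary and parameter validation might look like this (MathOps and Add are hypothetical examples, not from the analyzer project):

```csharp
using System;

public static class MathOps
{
    /// <summary>Adds two integers.</summary>
    /// <param name="a">The first addend.</param>
    /// <param name="b">The second addend.</param>
    public static int Add(int a, int b) => a + b;
}

public static class DocDemo
{
    // The summary element satisfies the first check; the two param elements,
    // whose name attributes match the parameter identifiers, satisfy the second.
    public static void Main() => Console.WriteLine(MathOps.Add(2, 3)); // prints 5
}
```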

Putting it all together yields:

private void CheckMethods(SyntaxNodeAnalysisContext syntaxNodeAnalysisContext)
{
    var node = syntaxNodeAnalysisContext.Node as MethodDeclarationSyntax;

    if (!node.Modifiers.Any(SyntaxKind.PublicKeyword))
        return;

    var xmlTrivia = node.GetLeadingTrivia()
        .Select(i => i.GetStructure())
        .OfType<DocumentationCommentTriviaSyntax>()
        .FirstOrDefault();

    // A public method with no XML documentation at all is missing its summary
    if (xmlTrivia == null)
    {
        syntaxNodeAnalysisContext.ReportDiagnostic(
            Diagnostic.Create(Rule, node.Identifier.GetLocation(), "Missing Summary"));
        return;
    }

    var hasSummary = xmlTrivia.ChildNodes()
        .OfType<XmlElementSyntax>()
        .Any(i => i.StartTag.Name.ToString().Equals("summary"));

    if (!hasSummary)
    {
        syntaxNodeAnalysisContext.ReportDiagnostic(
            Diagnostic.Create(Rule, node.Identifier.GetLocation(), "Missing Summary"));
    }

    var allParamNameAttributes = xmlTrivia.ChildNodes()
        .OfType<XmlElementSyntax>()
        .Where(i => i.StartTag.Name.ToString().Equals("param"))
        .SelectMany(i => i.StartTag.Attributes.OfType<XmlNameAttributeSyntax>());

    foreach (var param in node.ParameterList.Parameters)
    {
        var existsInXmlTrivia = allParamNameAttributes
                    .Any(i => i.Identifier.ToString().Equals(param.Identifier.Text));

        if (!existsInXmlTrivia)
        {
            syntaxNodeAnalysisContext.ReportDiagnostic(
               Diagnostic.Create(Rule, param.GetLocation(), "Parameter Not Documented"));
        }
    }
}

As you can see, it is possible to get at the XML comments for any given node in the syntax tree, provided you know where to look. There is obviously a lot more work that could be put into this analyzer to ensure you are following all the rules. If you are looking for a complete analyzer, look no further than the StyleCopAnalyzers project on GitHub. It has a handful of documentation rules that cover the cases I outlined and many more. Note that some of the code in this article used to get at the XML trivia was based on code in the XmlCommentHelper class in the StyleCopAnalyzers project.
