Testers are most likely familiar with the testing pyramid: unit and component tests at the base, various levels of integration tests above them, and end-to-end tests at the top.
Tools or applications, be they open or closed source, for customer or internal use, all have their own specific rules on how to use them correctly and optimally. It’s beneficial for teams to include automatic validations, like static code analysis checks, in these tools. Static code analysis allows a program to be tested without actually executing the code. It is not to be confused with syntax highlighting, which merely highlights the keywords and other elements a programming language offers.
Techniques like static code analysis help teams to ensure that the tools they build can be used as intended, especially when those tools will be used widely in the industry. Such checks can help uncover coding issues early in the development process, and can even help engineers learn how to operate those tools.
Static code analysis capability is especially important for developers of code editors and IDEs (Integrated Development Environments), as well as for IDE plugin developers. This is because those who create programming languages, test frameworks, and so forth must also implement static code analysis to make it available to the engineers who will use the tools.
However, it is not enough to implement those checks. It is also crucial to test them to ensure that they work properly, so that they don’t mislead or confuse users by falsely reporting correct code as incorrect (or vice versa).
A little background
Before I started to develop JetBrains IDE plugins, I worked as a test automation engineer. I developed unit tests for our test automation frameworks, Selenium-based system integration tests, and web UI tests.
When I was introduced to testing IDE plugins, at first it seemed straightforward to use unit testing and object mocking for certain IDE features. It was not an overly complex thing to do, but after a while I started to realize that it was overkill. When all you have is a hammer, everything looks like a nail.
I found that there are more convenient ways of performing those tests, which I cover in the sections below.
Different IDE platforms, different testing capabilities
In the next sections, I’ll show you how I tested code highlighting in the editors of three of the major IDEs for which I developed plugins. Each IDE offers different capabilities and approaches to testing its features, so to simplify things, I’ll move from a less-than-ideal solution (a unit testing approach) to a more fitting one (a sort of visual testing approach at the integration level).
Since my experience with Visual Studio and VS Code extension development is limited (I develop plugins mostly for JetBrains IDEs), there may be better solutions than the ones in this article. However, they still demonstrate well what I outlined in the introduction.
Please note that the sections below contain code snippets from three different languages, namely C#, TypeScript, and Java, so a bit of technical knowledge is needed. I have tried to keep the snippets as simple and lean as possible.
With all that said, let me start with an example.
An example problem
Let's say we have a simple method with a single vararg parameter that takes name-value pairs, meaning an even number of arguments must be passed in.
C# example:
void SetHeaders(params string[] nameValuePairs) { ... }
We want to report and underline the method name when an odd number of arguments are passed in (for example, a header name doesn’t have a value specified). A call to this method that should trigger reporting could be something like the following.
anObject.SetHeaders("header", "value", "another header");
Now, let’s see an example of what I implemented in a Visual Studio extension.
Visual Studio extensions
First, let me start by giving you a simplified summary of how Visual Studio extensions handle underlining different parts of a document.
Marking different ranges of an editor with either user-visible or hidden data is called tagging. When the corresponding tags are user-visible, they are displayed as so-called squiggles: wavy underlinings beneath the affected text.
Squiggles are represented by the ErrorTag class and its concrete custom implementations, and they carry user-visible information, like the type of the error (used for formatting) and the tooltip (an error message, for example) to display.
ErrorTags must be registered in an editor in order to be displayed. This is done with so-called ITagger instances, each tag type with its own ITagger, via its GetTags() method:
// SquiggleTag is a custom ErrorTag implementation
public IEnumerable<ITagSpan<SquiggleTag>> GetTags(NormalizedSnapshotSpanCollection spans) { ... }
The ITagSpan type contains the range where the tag is registered in an editor, so you know where it will be displayed.
When a modification happens in an editor, the extension platform calls this method with a span collection (ranges in the document) for which updated tagging information is requested. If it contains something like the range of a certain line, this method must return tagging data for that line. We'll use this method as the subject of our tests.
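To make this more tangible, here is a minimal sketch of what such a tagger could look like for our example. It is illustrative only: the class name and the FindIncompleteSetHeadersCall() helper are assumptions, not the actual implementation.

using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.Text;
using Microsoft.VisualStudio.Text.Adornments;
using Microsoft.VisualStudio.Text.Tagging;

// Illustrative tagger that squiggles SetHeaders() calls with an
// odd number of arguments.
internal sealed class IncompleteVarargsTagger : ITagger<IErrorTag>
{
    // Raised when tags change; not used in this sketch
    public event EventHandler<SnapshotSpanEventArgs> TagsChanged;

    public IEnumerable<ITagSpan<IErrorTag>> GetTags(NormalizedSnapshotSpanCollection spans)
    {
        foreach (SnapshotSpan span in spans)
        {
            // Hypothetical helper: returns the range of the method name of an
            // incomplete SetHeaders() call in this span, or null if there is none
            SnapshotSpan? methodName = FindIncompleteSetHeadersCall(span);
            if (methodName.HasValue)
            {
                yield return new TagSpan<IErrorTag>(
                    methodName.Value,
                    new ErrorTag(PredefinedErrorTypeNames.SyntaxError,
                        "An even number of arguments must be passed in."));
            }
        }
    }

    // Stub for the sketch; a real implementation would analyze the code
    private SnapshotSpan? FindIncompleteSetHeadersCall(SnapshotSpan span) => null;
}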
Testing the tagging data
We know that ITagger.GetTags(NormalizedSnapshotSpanCollection) must be called to retrieve, and then be able to validate, the actual tag information. It’ll be something like:
IList<ITagSpan<IErrorTag>> squiggles = squiggleTagger.GetTags(spanCollection).ToList();
The span collection is the one with which we can control the document range whose tags we are testing. Since, in my experience, it is not necessarily easy to create a NormalizedSnapshotSpanCollection with all its dependencies, I’m not including that part.
After we call GetTags(), we can go on and assert the returned squiggle data. We know that there must only be one tag for the aforementioned code snippet, on the method name, so let’s validate that first:
Assert.That(squiggles, Has.Count.EqualTo(1));
Then, validate each piece of data in that collection: the start position and length of the squiggle, as well as the error type and the error message:
Assert.That(squiggles[0].Span.Start.Position, Is.EqualTo(120));
Assert.That(squiggles[0].Span.Length, Is.EqualTo(10));
Assert.That(squiggles[0].Tag.ErrorType, Is.EqualTo("Error"));
Assert.That(squiggles[0].Tag.ToolTipContent, Is.EqualTo("An even number of arguments must be passed in."));
The rest of the test would involve creating or mocking a SpanCollection that is passed into GetTags(), and other setup logic, like emulating a document to have some content, and a kind of visual representation of the location of the tested tags.
Is this solution good?
You can see that this is essentially unit level testing, which in itself is not a problem. But, despite the fact that it achieves the goal of testing the tags, I see the following problems with it:
- It is too granular. Even though you could always hide the assertions in utility methods, the tests’ essence would remain the same.
- It is not really scalable when validating other custom tag data or many more tags.
- It doesn’t really communicate the intention well, because we can't see where in the document the tags would be displayed.
- Furthermore, when the content of a tested document changes for a new test, you may have to update existing tag positions in unrelated tests as well.
I have to point out that there are more advanced solutions than this for VS extension development, such as what the dotnet/roslyn project does. But that project seems to leverage additional, more involved technical components.
I think this first example demonstrates well how one might approach the problem at first, and helps lay the foundation for the examples in the following sections.
VS Code extensions
VS Code has a different concept and nomenclature for highlighting code (and for many other features) than Visual Studio. Side note: the Language Server Protocol uses the same concepts, but discussing it is outside the scope of this article.
Highlighting text in VS Code is called diagnostics. It involves the following parts:
- Diagnostic: contains information about where in a document an issue is present, its severity, and some other details.
- DiagnosticCollection: maps one or more Diagnostics to files in the workspace.
- ExtensionContext: a central object in VS Code extensions. It provides “a collection of utilities private to an extension”, such as registration of diagnostics, code completion, and more.
The idea is that you register one or more DiagnosticCollections in the ExtensionContext, to which you can then add Diagnostic objects and register them for certain files in the current workspace. This allows the collections to be displayed in the editors. Since VS Code extensions are written in TypeScript, the code snippets in this section are written in that language.
So, let’s say you have a function that analyzes a document and then updates the aforementioned diagnostics with zero or more issues found in that document:
export function analyseDocument(
doc: vscode.TextDocument,
diagnostics: vscode.DiagnosticCollection
) { ... }
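As a rough sketch of what the body might do, the function could construct Diagnostic objects and set them on the collection. The findIncompleteCalls() helper is a hypothetical placeholder for the actual analysis logic:

import * as vscode from "vscode";

// Hypothetical helper: returns the ranges of SetHeaders() calls
// that were passed an odd number of arguments
declare function findIncompleteCalls(doc: vscode.TextDocument): vscode.Range[];

export function analyseDocument(
  doc: vscode.TextDocument,
  diagnostics: vscode.DiagnosticCollection
) {
  const issues = findIncompleteCalls(doc).map(
    range => new vscode.Diagnostic(
      range,
      "An even number of arguments must be passed in.",
      vscode.DiagnosticSeverity.Error
    )
  );
  // Replace any previously reported diagnostics for this document
  diagnostics.set(doc.uri, issues);
}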
This function must be registered in VS Code’s event handlers (on document change, for example), so that it is invoked when changes are made in a document. Once it is wired in, testing this function can begin.
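For context, that wiring might look something like the sketch below, assuming it happens in the extension’s activate() function (the collection name is arbitrary):

import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  // The collection whose diagnostics VS Code will display in editors
  const diagnostics = vscode.languages.createDiagnosticCollection("incomplete-varargs");
  context.subscriptions.push(diagnostics);

  // Re-analyze a document whenever it is changed;
  // analyseDocument() is the function shown above
  context.subscriptions.push(
    vscode.workspace.onDidChangeTextDocument(event =>
      analyseDocument(event.document, diagnostics)
    )
  );
}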
The naive approach would be invoking it with a manually created TextDocument and DiagnosticCollection, and validating whether the DiagnosticCollection contains the correct Diagnostic objects. However, there is a more straightforward way.
Testing in a VS Code instance
There is an npm (Node.js Package Manager) package called @vscode/test-electron with which (among other things) you can pre-configure a workspace with files and folders, open an actual VS Code instance from a test with that workspace, and use the documents in that instance for testing.
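For reference, a test runner script using this package might look something like the following sketch; the paths are illustrative and depend on your project layout:

import * as path from "path";
import { runTests } from "@vscode/test-electron";

async function main() {
  try {
    await runTests({
      // Folder containing the extension's package.json
      extensionDevelopmentPath: path.resolve(__dirname, "../.."),
      // Entry point of the compiled test suite
      extensionTestsPath: path.resolve(__dirname, "./suite/index"),
      // Open the VS Code instance with a pre-configured test workspace
      launchArgs: [path.resolve(__dirname, "../../test-workspace")],
    });
  } catch (err) {
    console.error("Failed to run tests", err);
    process.exit(1);
  }
}

main();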
Knowing that the test automatically opens VS Code with a workspace in it (thanks to the test-electron configuration), the first step is to open a document from the configured workspace. Alternatively, you can create new documents and populate them with content on the fly, if that is what’s necessary for your tests.
const fileUri = await getFileFromWorkspace("IncompleteVarargs.cs");
const textDocument = await vscode.workspace.openTextDocument(fileUri);
await vscode.window.showTextDocument(textDocument);
Here, getFileFromWorkspace() is a utility method that returns a file URI for the given file name (IncompleteVarargs.cs) from the workspace. IncompleteVarargs.cs contains an anObject.SetHeaders() method call with an odd number of arguments that we can use for testing.
As for opening a document, the difference between openTextDocument() and showTextDocument() is that the former doesn’t actually show the new editor tab; only the latter does. But you need both.
Then, you can retrieve all diagnostics from the workspace via VS Code’s API, and select the diagnostics for the tested file.
//An array of Uri and Diagnostic[] pairs
const diagnostics = vscode.languages.getDiagnostics();
//A single pair of Uri and Diagnostic[]
const uriAndDiagnostics = diagnostics[0];
//Of type Diagnostic[]
const diagnosticsForFile = diagnostics[0][1];
//A single Diagnostic object (here, the second one reported for the file)
const diagnostic = diagnostics[0][1][1];
//Validating each property separately
assert.strictEqual(diagnostic.severity, DiagnosticSeverity.Error);
assert.strictEqual(diagnostic.message, "An even number of arguments must be passed in.");
assert.strictEqual(diagnostic.range.start.line, 5);
assert.strictEqual(diagnostic.range.start.character, 10);
assert.strictEqual(diagnostic.range.end.line, 5);
assert.strictEqual(diagnostic.range.end.character, 20);
//Or via one object in a more convenient way
assert.deepStrictEqual(diagnostic, {
severity: DiagnosticSeverity.Error,
message: "An even number of arguments must be passed in.",
range: {
start: { line: 5, character: 10 },
end: { line: 5, character: 20 }
}
});
Is this solution good?
Now, this is very similar to the Visual Studio solution, but it may be complicated in a different way: querying the diagnostics involves multi-level array indexing. However, if you use deep object equality checks, the complexity and readability of the test can be improved greatly.
Overall, I could say almost the same things about this approach as I did for the Visual Studio example. But it is more convenient in that you can properly configure an actual VS Code workspace and perform your testing within that workspace. And you have access to other VS Code functionality as well. This makes it possible to test your extension’s functionality from an end user’s perspective.
Is there an alternative?
During my research for this article, I came across the stylelint/vscode-stylelint project on GitHub which employs the Jest library’s snapshot testing capabilities. This, to be honest, is a nice segue to the next IDE’s capabilities. To quote from Jest’s website:
“A typical snapshot test case renders a UI component, takes a snapshot, then compares it to a reference snapshot file stored alongside the test. The test will fail if the two snapshots do not match: either the change is unexpected, or the reference snapshot needs to be updated to the new version of the UI component.”
The test code retrieves the Diagnostic objects from a document, then performs a snapshot validation.
const diagnostics = await waitForDiagnostics(document);
expect(diagnostics.map(normalizeDiagnostic)).toMatchSnapshot();
The validation is performed against a snapshot file that stores the expected Diagnostic object property values like this (exact example from the mentioned snapshot file):
Object {
"code": "plugin/foo-bar",
"message": "Bar (plugin/foo-bar)",
"range": Object {
"end": Object {
"character": 6,
"line": 0,
},
"start": Object {
"character": 5,
"line": 0,
},
},
"severity": 1,
"source": "Stylelint",
}
I like this solution much more because
- the test code and test data are separated
- the test code is much more concise
- it's easier to understand what is highlighted and how
However, this approach still doesn’t allow you to see exactly where in the file the diagnostics would be applied. For that, we’ll move on to yet another IDE and explore a different kind of snapshot testing.
The IntelliJ platform
IntelliJ has its own unique concepts when it comes to syntax and error highlighting. One of them is called Inspections, which is designed for implementing static code analysis. I will demonstrate this feature in the next sections.
The Program Structure Interface (PSI)
Inspections and many other platform features use the so-called PSI (Program Structure Interface) to work with code elements in the editor.
From the IntelliJ Platform Plugin SDK documentation:
"A PSI (Program Structure Interface) file is the root of a structure representing a file's contents as a hierarchy of elements in a particular programming language."
You can imagine it as an abstraction over an AST (Abstract Syntax Tree) that is generated for each file based on the corresponding language’s grammar definitions. It is similar to working with the Reflection API in the Java language: you can query the classes, methods, constructors, and more. For example, the following Java code snippet
public class PSIDemo {
public void aMethod() {
}
}
would produce a PSI tree along the lines of the sketch below.
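This is a simplified, hand-written approximation of that tree; the node names are approximate, and the PSI Viewer in the IDE shows more detail:

PsiJavaFile: PSIDemo.java
  PsiClass: PSIDemo
    PsiModifierList: public
    PsiIdentifier: PSIDemo
    PsiMethod: aMethod
      PsiModifierList: public
      PsiTypeElement: void
      PsiIdentifier: aMethod
      PsiParameterList: ()
      PsiCodeBlock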
Inspections
To implement an inspection, you need to create a class that extends the LocalInspectionTool base class and overrides one of its buildVisitor() methods:
- The returned visitor class corresponds to the Visitor design pattern, and lets you visit and analyze different types of nodes in files’ PSI trees.
- The related ProblemsHolder type is the one via which you register the PSI elements to highlight, with an error message, optional formatting, and optional quick fixes.
class IncompleteVarargsInspection extends LocalInspectionTool {
@Override
public PsiElementVisitor buildVisitor(ProblemsHolder holder, boolean isOnTheFly) { ... }
}
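To illustrate, here is a minimal sketch of what the visitor could look like for our example. Matching on the method name alone is a simplification; a real inspection would also check the type of the object the method is called on:

import com.intellij.codeInspection.LocalInspectionTool;
import com.intellij.codeInspection.ProblemsHolder;
import com.intellij.psi.JavaElementVisitor;
import com.intellij.psi.PsiElementVisitor;
import com.intellij.psi.PsiMethodCallExpression;

class IncompleteVarargsInspection extends LocalInspectionTool {
    @Override
    public PsiElementVisitor buildVisitor(ProblemsHolder holder, boolean isOnTheFly) {
        return new JavaElementVisitor() {
            @Override
            public void visitMethodCallExpression(PsiMethodCallExpression call) {
                super.visitMethodCallExpression(call);
                // Simplification: match the call by method name only
                if ("setHeaders".equals(call.getMethodExpression().getReferenceName())
                        && call.getArgumentList().getExpressionCount() % 2 != 0) {
                    // Highlight the method reference with an error message
                    holder.registerProblem(call.getMethodExpression(),
                            "An even number of arguments must be passed in.");
                }
            }
        };
    }
}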
As for testing, we'll target the aforementioned buildVisitor() method and this inspection class overall.
Testing in an in-memory editor
Although the IntelliJ Platform has support for UI testing of IDE features via its own IntelliJ UI Test Robot, you don't test inspections in an actual IDE instance or even in a real editor. But that is not a requirement for comprehensive testing of inspections.
Also, in the case of inspections and other platform features that have visual representations in the editor, test results are embedded into the tested source files using an XML-like markup, like this:
<warning descr="expected warning message">the code to be highlighted</warning>
If we were to test our original example, the test implementation would look something like this:
@Test
void shouldReportIncompleteParams() {
getFixture().configureByText("IncompleteVarargs.java", """
class IncompleteVarargs {
void aMethod() {
Request request = new Request();
request.<error descr="An even number of arguments must be passed in.">setHeaders</error>("header", "value", "another header");
}
}""");
getFixture().enableInspections(new IncompleteVarargsInspection());
getFixture().testHighlighting(true, false, true);
}
The test performs the following steps:
- Before executing the test method, it initializes an empty project based on the IntelliJ Platform base class the test class extends. (This is omitted from the example for brevity.)
- configureByText() loads the Java file called IncompleteVarargs.java with the provided content into the project, and creates an in-memory editor with that content. By loading a file into the editor using the filename + text combination instead of specifying a path to an existing test data file, you don’t necessarily have to extract the test data into separate files, which may ease maintenance. I personally use a mix of the two: I usually load short file contents via filename + text, while I extract content into files when it is longer, or when, for example, the feature I’m testing requires the file to be in a certain folder structure.
- enableInspections() configures the inspection implementation to test highlighting for.
- testHighlighting() executes our inspection and compares the highlighting results with the content we specified in configureByText(). The arguments are used to configure which severity levels the test will validate, and the corresponding markup tags to include in the test data file. You can interpret it as
testHighlighting(/*checkWarnings*/ true, /*checkInfos*/ false, /*checkWeakWarnings*/ true);
Is this solution good?
I think that, among the three IDEs’ solutions, this one, as integration-level testing, is the most suitable for testing highlighting, and it is the one I personally like the most, for the following reasons:
- Configuring the test is quite straightforward in this case and in many other cases as well.
- There is less test code and less test data to work with, and you don’t necessarily have to implement specific utilities, since many are provided by the IntelliJ Platform. This makes test implementation, maintenance, and comprehension easier.
- The highlighting information is not separated from the test data; the XML markup is embedded into the test files. So it is easier to comprehend what the highlighting would look like in an actual editor, and where the highlighting would be applied.
The choice is yours
I hope you found some interesting nuggets of information in this article regardless of your level of experience with testing IDE features.
There may be newer, more convenient solutions for the technologies and techniques mentioned in this article. Regardless of the capabilities of each IDE or related utilities, these examples demonstrate well that certain test levels (integration / end-to-end in this particular case) are better suited for testing certain solutions than unit tests. So, choose wisely.
For more information
- Visual Studio Extensibility Documentation
- Michael's Coding Spot: Highlighting code in Editor
- Stack Overflow: VSIX: IErrorTag tooltip content not displaying
- VS Code Extension API Documentation
- IntelliJ Platform Plugin SDK Documentation
- The Hidden Treasure Of Static Analysis: Finding Risks In Forgotten Places, Hilary Weaver
- Three Ways To Measure Unit Testing Effectiveness, Eduardo Fischer dos Santos
Learn more with MoT
- Supercharging Your Test Automation Code With AI Assistance In Your IDE by Valentin Agapitov
- What testing tools are in your current tech stack? by Jesper Ottosen
- An Inside Job: Customising Static Code Analysis for Optimising Internal Tools by Tamás Balog
- The Hidden Treasure Of Static Analysis: Finding Risks In Forgotten Places by Hilary Weaver