Building a command-line interface
Every project, regardless of where it is in the software lifecycle, has mundane and error-prone tasks. These tasks usually consist of multiple smaller commands, such as starting or stopping an application, building, database maintenance, or source control operations.
Usually, devs document tasks in one place, so that team members don't need to jump around searching various documentation. Often, developers memorize parts of the documentation and perform tasks from memory. However, if something changes, people working off memory risk missing steps, parameters, or other aspects. The knowledge gap leads to unnecessary troubleshooting, which wastes time and causes frustration.
When tasks are performed frequently and require predictable user input, such as parameters or command sequences that must be worked out each time, it might be a good idea to automate them. Automation prevents the pesky mistakes people make, be it syntax errors or missed steps. Furthermore, automation also serves as "living" technical documentation: developers can read the scripts and understand what is performed.
Automating these types of tasks isn't complicated. Sticking to some guidelines and conventions helps the team quickly create commands to streamline time-consuming tasks, giving them more time to solve problems instead of clicking buttons. A good way to automate tasks is to implement a custom command-line interface that enables a developer to tear down and build up the whole project infrastructure in a few keystrokes. This article covers a small use case where building a command-line interface (CLI) saved time and proved beneficial. I'll also provide samples and guidelines for creating a CLI.
The project
Our team inherited a project midway through development from another vendor that included:
plenty of scripts already written to use for development,
a microservice architecture with over ten microservices running .NET 4.7.1,
a microservice database solution using the same database for all services,
and over ten git repositories needing constant management.
The pain points
The development was significantly slow because of:
Context switching: Each service had individual migrations and git repositories, which made switching to a different task and helping teammates slow. Each task required developers to switch a large number of git repositories to a different branch, rebuild the projects, and start them again.
Keeping the database up to date: Adding or editing a table broke the application. Fixing it took time, whether a few clicks or deeper digging, to identify the changes and apply the migrations for the affected microservice.
Starting/restarting the microservices: If you have ever tried running multiple instances of Visual Studio at the same time, you know what it does to your computer. Stopping, starting, and rebuilding the app with Visual Studio was not an option.
Remembering the parameters each script needed to run: Performing a database operation or task was a hassle. Developers needed to open the script, review what parameters it expected, and supply them.
Ensuring the services were running: Sometimes a microservice didn't start. Uncovering the issue causing the break took time or, worse, time was wasted debugging something that was not broken.
After identifying the key pain points, it was easy to see exactly how a CLI could help:
Starting/stopping/restarting the microservices
Switching branches
Running scripts
Checking that the microservices run properly
The impact
Building most of the functionality took a couple of days. After introducing the CLI, most of the team found it convenient to use, and it sped up development. Team member feedback helped to further improve features. By taking advantage of scripts already existing in the project, maintenance effort dropped significantly, and the time savings added up substantially.
The tools
As a result of this success, I began exploring the options of the framework I used most: .NET Core. Since parsing commands by hand would be cumbersome, error-prone, and slow to write, I investigated whether any libraries handled argument parsing. After some searching, I found that the CommandLineUtils library offered the set of features I was looking for, including:
dependency injection support,
an easy-to-use API for building CLI documentation,
an intuitive API for building commands using attributes or a builder function,
interactive console prompts,
and integration with the .NET generic host.
After choosing this library, it was time to revisit the key pain points in project development.
The code
I set up a git repository with some example code, shown below. I will not cover every command implementation; that would make this article too long and very boring.
Configuration
Start by wiring up the app with commands, dependency injection, and some basic error handling.
public static int Main(string[] args)
{
    var services = new ServiceCollection()
        .AddSingleton<DemoConfigurationProvider>()
        .BuildServiceProvider();

    var app = new CommandLineApplication<Demo>();
    app.Conventions
        .UseDefaultConventions()
        .UseConstructorInjection(services);

    try
    {
        return app.Execute(args);
    }
    catch (CommandParsingException ex)
    {
        Console.WriteLine(ex.Message);
        if (ex is UnrecognizedCommandParsingException uex &&
            uex.NearestMatches.Any())
        {
            Console.WriteLine();
            Console.WriteLine("Did you mean this?");
            Console.WriteLine("    " + uex.NearestMatches.First());
        }
        return 1;
    }
    catch (Exception e)
    {
        Console.WriteLine("An exception occurred");
        Console.WriteLine($"Exception message: {e.Message}");
        return 1;
    }
}
To store some information so that the CLI does not require user input every time it runs, use a simple JSON file kept in the user's home directory. Luckily, on Windows the home directory path is available in an environment variable by default (shown below).
public class DemoConfigurationProvider
{
    private readonly string ConfigurationPath =
        Environment.GetEnvironmentVariable("USERPROFILE") + $"{Path.DirectorySeparatorChar}demoOptions.json";

    public DemoConfiguration GetConfiguration()
    {
        if (File.Exists(ConfigurationPath))
        {
            var json = File.ReadAllText(ConfigurationPath);
            var config = JsonConvert.DeserializeObject<DemoConfiguration>(json);
            return config;
        }
        return ConstructOptions();
    }
The ConstructOptions method handles the first-time setup. If the user has not run the application before, it asks for the project directory, validates it, and saves it into the configuration JSON file.
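The repository contains the actual implementation; the sketch below only illustrates that first-run flow. The names are hypothetical, and it uses Console.ReadLine and System.Text.Json to stay dependency-free, where the real code would rely on the CommandLineUtils prompt API and Json.NET.

```csharp
using System;
using System.IO;
using System.Text.Json;

public class DemoConfiguration
{
    public string ProjectDirectory { get; set; }
}

public static class DemoSetup
{
    // Hypothetical sketch of the first-run flow: prompt until an existing
    // directory is entered, then persist the choice as JSON.
    public static DemoConfiguration ConstructOptions(string configurationPath)
    {
        string path;
        do
        {
            Console.Write("Set the base project directory: ");
            path = Console.ReadLine();
        } while (string.IsNullOrWhiteSpace(path) || !Directory.Exists(path));

        var config = new DemoConfiguration { ProjectDirectory = path };
        File.WriteAllText(configurationPath, JsonSerializer.Serialize(config));
        return config;
    }
}
```

Because the result is written to disk, every later run can read the file instead of prompting again.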
Commands
Create a base command class to re-use some of the basic functionality in more specific commands, like this:
[HelpOption("-?|-h|--help")]
public abstract class DemoCommandBase
{
    public abstract List<string> CreateArgs();

    protected DemoConfiguration Configuration;

    public DemoCommandBase(DemoConfigurationProvider configurationProvider)
    {
        this.Configuration = configurationProvider.GetConfiguration();
    }

    protected virtual int OnExecute(CommandLineApplication app)
    {
        var args = CreateArgs();
        // Change to the invocation directory first so the commands run from there.
        args.Insert(0, $"cd {Directory.GetCurrentDirectory()}");

        using (var proc = new Process())
        {
            var startInfo = new ProcessStartInfo();
            startInfo.FileName = "cmd.exe";
            startInfo.UseShellExecute = false;
            startInfo.RedirectStandardInput = true;
            proc.StartInfo = startInfo;
            proc.Start();

            using (var buf = proc.StandardInput)
            {
                foreach (var cmd in args)
                {
                    buf.DoCommand(cmd);
                }
            }

            if (proc.HasExited)
            {
                proc.Close();
            }
        }
        return 0;
    }
}
The HelpOption attribute enables the library-generated help documentation, and the constructor fetches the configuration for the commands to use. The OnExecute function takes the list of shell commands created in subcommands, spawns a cmd.exe process, changes its working directory, and runs the script/shell commands. This works great for running simple scripts or shell commands that need to run from a specific directory. The drawback is that errors are hard to handle should any arise.
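The DoCommand call in OnExecute is not a standard library method; I'd guess it is a small extension over the process's standard-input writer, along these lines (the name and behavior here are assumed, not taken from the original repository):

```csharp
using System.IO;

public static class StandardInputExtensions
{
    // Assumed helper: forward one shell command to the spawned cmd.exe
    // by writing it to standard input and flushing immediately.
    public static void DoCommand(this StreamWriter standardInput, string command)
    {
        standardInput.WriteLine(command);
        standardInput.Flush();
    }
}
```

Flushing after each line matters: cmd.exe only sees a command once the buffered writer pushes it through the pipe.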
Then, create a base command that acts as the trunk, which lets users build the command tree.
[Command("demo")]
[Subcommand(
    typeof(ExampleSubcommand)
)]
public class Demo : DemoCommandBase
{
    public Demo(DemoConfigurationProvider configurationProvider) : base(configurationProvider)
    {
    }

    public override List<string> CreateArgs()
    {
        // You can implement some base functionality, like validation, here.
        return new List<string>();
    }
}
In the CreateArgs function, you can implement some common functionality for all subcommands to use if necessary. Lastly, define the subcommand.
The CreateArgs function in this class is where you execute services, form commands, or do any other work. At this point, running the program with the -h parameter prints helpful documentation that tells you how to use the app. If you want information on a specific command, run demo example -h, which gives you the following text:
Usage: demo example [options] <Command>

Arguments:
  Command       Example argument

Options:
  -?|-h|--help  Show help information
  -opt          Example option
[Command(Name = "example", Description = "Example command, used for demonstration")]
public class ExampleSubcommand : DemoCommandBase
{
    [Argument(0, Description = "Example argument")]
    public string Command { get; set; }

    [Option("-opt", Description = "Example option")]
    public string MigrationName { get; set; }

    private Demo Parent { get; set; }

    public ExampleSubcommand(DemoConfigurationProvider configurationProvider) : base(configurationProvider)
    {
    }

    public override List<string> CreateArgs()
    {
        var args = Parent.CreateArgs();
        // Fill the list with the shell commands to execute; note that for now the
        // base functionality only works for a single command.
        // Optionally, ignore the args, do any other work here, and return an empty list.
        return args;
    }
}
Setup
One of the CLI's benefits is that it can run from any location. To achieve this, add the folder containing the .exe to the PATH in the Windows environment variables.
Then, every time there are new features in the CLI, all that is required to get them is a git pull and a dotnet build.
8 time-saving tips for building a CLI
As with all software, developers want the CLI to be fast and effortless to use. The following are some tips I picked up to maximize time savings.
1. Prioritize convention over abstraction.
The purpose of building a CLI is to facilitate development flow, and commands that need a lot of parameters become tedious in their own way. A good rule of thumb is no more than two arguments and no more than three options, with agreed-upon default values. If that cannot be done and more parameters are needed, talk with your team and agree on some conventions.
2. Configure, save data, and set aside.
A lot of customization can be done using a configuration file. Ask the user for some values and save them in a file. Re-use saved data whenever needed.
3. Keep the setup simple.
It is not a great idea to expect the user to know the exact setup steps for the CLI. Having a configuration template is good. However, an even better idea is asking the user to supply the configuration on first use. In my example code, I check whether the user is starting the application for the first time by verifying the existence of a configuration file. If there isn't one, I ask the user to supply the project path, create the file, and store the supplied value. As a result, there is no need to ask again.
4. Keep the commands short and clear.
Even the smallest time savings matters when doing repetitive tasks. Abbreviate and shorten commands to minimize the keystrokes needed to get the job done. However, try to make the command name still reflect its purpose.
5. Supply built-in documentation.
Not everyone is willing to take the time to memorize the commands to perform tasks, nor should they. Invoking the CLI with an -h or --help option should list the available commands and explain what they do. Further, invoking a command with the help option should list all of its arguments and options and how to use them.
6. Use what you have.
When writing a CLI for an ongoing project, there is no need to re-implement existing scripts. The main benefit is that you can build the tool fast. Invoking an existing script from the CLI with some configured arguments is a great idea. In my case, each microservice had a folder containing PowerShell scripts that update the database. Invoking these scripts made the commands very fast to implement.
7. Make it easy to invoke.
Add your app to the system PATH or, if you are using a tool like Cmder, create an alias so the CLI can be invoked from anywhere in the system.
8. Take advantage of framework features.
It is tremendously efficient to run start/build operations concurrently using .NET tasks. If you have ten microservices to build and run, it can cut down startup time significantly. However, it can put a heavy load on your computer.
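A minimal sketch of that idea, assuming a RunBuild helper and a list of service directories (both hypothetical, not from the original project):

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

public static class ParallelBuilder
{
    // Kick off one "dotnet build" per service directory and await them all.
    public static Task BuildAllAsync(IEnumerable<string> serviceDirs) =>
        Task.WhenAll(serviceDirs.Select(dir => Task.Run(() => RunBuild(dir))));

    private static void RunBuild(string dir)
    {
        // Each build runs in its own process with the service folder as its
        // working directory.
        using var proc = Process.Start(new ProcessStartInfo
        {
            FileName = "dotnet",
            Arguments = "build",
            WorkingDirectory = dir,
            UseShellExecute = false,
        });
        proc.WaitForExit();
    }
}
```

With ten services, total startup time approaches that of the slowest build instead of the sum of all of them, which is also where the CPU and disk load comes from.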
Save time with CLI
Building a CLI is easy, fast, and helpful, making development quicker with more tools available in a project. It is extremely beneficial when developing a project with many microservices. By contrast, the benefits may lessen for monolithic applications or applications that do not require a complex setup. Ultimately, every project is different and requires different types and amounts of tooling to maintain. If you find that running scripts or tools chips away at the time spent actually solving problems and delivering value, an orchestrator CLI app might be the solution.