Hello, world!

Welcome to Integration Junction, the Salesforce-oriented developer blog by Vernon Keenan and the team at Taxnexus.

I am the proud owner and operator of a communication service provider in Berkeley called Telnexus. We use the 2600Hz communications cloud.

We are finishing up a giant telecom billing project. That code has spun out of the telecom business into Taxnexus, and I’ve taken on the role of Tech Lead for all Telnexus and Taxnexus system development work.

Coding is fun!

We need excellent developers at Taxnexus, and we are working on stuff more fun than tax algorithms! By sharing my coding secrets with you, gentle reader, I hope to engage with just the right person to join our team! If you think that person is you, send us an email to jobs@taxnexus.net.

If you do send us an email, be sure to prove you have checked out our website, as well as delved into this blog.

What will Integration Junction become? That’s the real question here! Probably some tips and tricks for the Salesforce developer community, and maybe some observations on the state of developer-type things.

Here’s a list of possible topics to come:

  • Object-Relational Modeling and Salesforce: Is there such a thing?
  • Killing off Conga — It Can Be Done!
  • PHP and Go as Salesforce App Server environments — Pros and Cons

Let’s hope these posts will be useful. And, here we go! 

I Had a Go, CORS and Single Page App Ordeal So You Don’t Have To

Microservices are awesome, until you need to serve them up in a Single Page Application (SPA). That’s when you need to tame the CORS (cross-origin resource sharing) beast so your fancy new microservices can actually be used by front-end developers.

Don’t Have a CORS Ordeal

I just spent days on this one, so I thought I’d relay how my final solution works. In my case, I have a swagger-first approach to API development, where I feed a YAML file into go-swagger and then go-swagger generates a REST server framework. This works great with lots of microservices that don’t have any problem communicating using HTTP.

The problem we encountered was using Angular to make a single-page web application that would then use the APIs directly in the browser. That’s when CORS comes into play. I couldn’t get the headers to work right in Go using the go-swagger framework.


Making CORS Work!

I take a swagger-first, also known as an OpenAPI standards, approach to API development: I feed a YAML file into go-swagger, and go-swagger generates a REST server framework.

I could access my microservices externally on a server-to-server basis using api-umbrella as an API gateway. I even suspect I could have solved my CORS problem with api-umbrella configuration, but I didn’t see how.

I hit this problem with Angular, but anyone can have a CORS ordeal when connecting any new REST service to a JavaScript SPA.

My Toolchain

We all get attached to the chain of events that goes from a source code change to finished deployments. I needed my solution to work with a simple API gateway, like api-umbrella. Fancy solutions like AWS API Gateway don’t work for me due to lock-in and cost issues.

In a nutshell, here are the steps I use to build a microservice:

  1. Hand-write the swagger (OpenAPI v2) file in YAML
  2. Use go-swagger to generate the REST server
  3. Update the code to implement handler functions
  4. Test in VS Code
  5. Use Drone CI/CD to generate Docker images in a private registry
  6. Use docker-compose to orchestrate service startup on a private host, in a private VPC, and in a private datacenter
  7. Deploy api-umbrella as the API gateway between public URIs and the backend host

The Journey To Victory

After quite a journey, I finally came upon the solution at the swagger docs. The real trick in taming CORS, though, is to handle the “pre-flight” checks a web client makes before the actual REST method is invoked.

The solution for us is to create an OPTIONS method and write the response handlers for it in Go using the go-swagger framework.


Make a swagger YAML file

This sample OpenAPI (swagger) YAML file is a very simple representation of a single endpoint REST server with two methods.
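The embedded spec is easiest to follow with the shape in front of you, so here is a minimal sketch of such a file. The /coordinate path, CoordinateResponse, CORSResponse, and the “cors” tag match the names discussed in this post; everything else (titles, header names, the X-API-Key scheme) is illustrative:

```yaml
swagger: "2.0"
info:
  title: coordinate-service
  version: "0.0.1"
paths:
  /coordinate:
    get:
      tags: [coordinate]
      security:
        - APIKeyHeader: []
      responses:
        "200":
          $ref: "#/responses/CoordinateResponse"
    options:
      tags: [cors]
      security: []          # preflight must work without authentication
      responses:
        "200":
          $ref: "#/responses/CORSResponse"
securityDefinitions:
  APIKeyHeader:
    type: apiKey
    in: header
    name: X-API-Key
responses:
  CoordinateResponse:
    description: A single coordinate
    schema:
      $ref: "#/definitions/Coordinate"
    headers:
      Access-Control-Allow-Origin:
        type: string
  CORSResponse:
    description: CORS preflight response (headers only, no body)
    headers:
      Access-Control-Allow-Origin:
        type: string
      Access-Control-Allow-Methods:
        type: string
      Access-Control-Allow-Headers:
        type: string
definitions:
  Coordinate:
    type: object
    properties:
      lat:
        type: number
      lon:
        type: number
```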

To get acquainted with this swagger definition, please note that I’ve used references rather heavily throughout the file. For example, if you look at the GET /coordinate path definition, you’ll see a reference to #/responses/CoordinateResponse.

Look in the responses section of the file, and you’ll see where CoordinateResponse includes both a JSON response body and header definitions.

Another handy definition in the responses section of the swagger file is CORSResponse, which is used to define the OPTIONS /coordinate response.

In my go-swagger environment, this YAML file generates the core HTTP services and frameworks for servicing inbound requests.

Notice how GET /coordinate has an authentication specification and OPTIONS /coordinate has no authentication. I need this because when Angular (or any SPA) causes the web browser to make an outbound call to a CORS-compliant REST server that is not in the current origin, the browser expects to receive the CORS OPTIONS response without any authentication.

By the way, you might be able to get around a CORS issue as an API consumer by including a local API proxy, which doesn’t need CORS when making a server-to-server HTTP call. But this technique requires the API consumer to include a proxy in their front-end code.

I also used a separate “cors” tag which helped organize and separate my “preflight” CORS options.

Next, I needed to write a preflight CORS handler function in the Go server. That is done by creating Go functions that conform to the go-swagger function conventions, drawn from the OpenAPI (swagger) file.

Add CORS Header to Secure Response

Almost done. Next I need to modify my GetCoordinate handler to add the WithAccessControlAllowHeaders modifier.

Thank You!


I hope you liked my “little” story on how I solved my CORS problem.

I’m sure I didn’t get it right, so let me know how I could have done it better!

Make a Docker Host Fast and Easy with VMware ESXi and Photon OS 3

If you’re an up-and-coming tech startup like Taxnexus, you can’t afford to spend all your money on AWS doing devops.

Are you dumping money into the AWS Money Pit?

The next time you get stuck with a $500 AWS surprise because someone was really trying to make things work better, think about building a devops playground on-prem or at a local colocation facility.

Move some of your Docker workload over to a bare-metal setup using VMware ESXi, the oldest free, commercial hypervisor. Just imagine all the cheap cores at your disposal with a new AMD Ryzen-based server! And, by using Photon OS as an ESXi-optimized host OS you get the best performance and super-simple, built-in Docker support.

Let’s get started!

Install VMware ESXi and Photon OS

Hit your new VMware ESXi host on HTTP to access the management tools
  1. Set up your server hardware with as many cores and as much memory and fast storage as you can afford. Check this article for more on free ESXi limitations.
  2. Set up ESXi on the local console.
  3. Install your new server in a private network available to your workstations, then browse to the management web page to open the VMware Host Client.
  4. Download the Photon OS 3 ISO from the VMware Github repo. These instructions are for the ISO version only; do not use the OVA version.
  5. Upload your ISO to a folder in your VMware datastore.
  6. Create a new VMware virtual machine from the ISO.
  7. Install Photon OS 3 as your first Docker host. Be sure to name your new server!

Now we get to the tricky stuff that kind of makes Photon a pain, because it comes up secure and lacking in network niceties. I use Photon as a single root user, so some additional setup is required to get remote SSH working properly.

  1. Set up static IP
  2. Allow external hosts to ping
  3. Enable remote root login
  4. Start and Enable Docker

Set Up Static IP

Access the virtual console in the VMware Host Client and log into your new VM using the root password specified during setup.

To change the IP address from DHCP to static…

# Edit network config file
vi /etc/systemd/network/99-dhcp-en.network

For a host with a static IP, gateway and DNS on your LAN, in a “mydomain.local” DNS zone, change the file to this:
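What follows is a sketch of a typical systemd-networkd static configuration. The addresses (host, gateway and DNS at and the eth0 interface name are example values to substitute with your own:

```ini
# /etc/systemd/network/99-dhcp-en.network
[Match]
Name=eth0

[Network]
Address=
Gateway=
DNS=
Domains=mydomain.local
```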



Make sure the file permissions are right, restart networking, and check that the new IP is active.

# set permissions, restart networking and show interfaces
chmod 644 /etc/systemd/network/99-dhcp-en.network
systemctl restart systemd-networkd
ip addr show

Set Up External Ping

If you’re like me, then you like to know when your servers are up by having them send back a reply to an ICMP Echo request. Here are the steps for that:

# change and save iptables
iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables-save >/etc/systemd/scripts/ip4save

Enable Remote Root Login

The ssh daemon does not allow for remote root login by default. If you are OK with not creating special system users, then you need to enable root login by changing “PermitRootLogin no” to “PermitRootLogin yes” in the daemon config file.

# edit ssh daemon config
vi /etc/ssh/sshd_config

# search for "PermitRootLogin no"
# located at line 125
# change it to this
PermitRootLogin yes

# restart sshd
systemctl restart sshd

Start and Enable Docker

The real glory of this procedure is that Docker comes pre-installed in Photon OS, so you avoid all that mess.

# update to latest docker version (Photon OS uses tdnf as its package manager)
tdnf update -y
# start docker for the first time
systemctl start docker
# enable docker to start automatically
systemctl enable docker
# check that it is working
docker info
docker run hello-world

That’s All Folks!

Remember you only get 8 cores per VM in the free version of ESXi, so spread out your workload across multiple VMs to get started.

My next project on Photon is to try out their Kubernetes installation, which is supposedly a one-liner. Let me know if you get that going!

Dear Go — Thank You For Teaching Me PHP Was A Waste of My Time

I am as old as the hills and I started programming computers with assembly language in the 70’s. I’ve been looking for something to kick-start my cloud projects. Now I feel rejuvenated with some easy successes in Go!

This “letter in a bottle” will hopefully be found by other poor slobs like me who are even thinking about using PHP for their cloud project.

I programmed the PDP8 with flip switches in a computer lab at Northwestern University in 1977

To assert my tech cred, I could bore everyone with the story about flipping switches on a PDP8 in the musty subbasement of a famous Midwestern university, or I can get to talking about what’s actually happening today for server programmers, with just a dash of history.

I’ve been using and learning about nearly every computer language and database technology since the late 1970’s. The last time I was serious about PHP as an app server platform was way back before PHP 5. We didn’t have classes and the other Object Oriented Programming capabilities that have given PHP more life lately.

Three years ago, when I needed an app server for my Salesforce project, I turned to PHP out of necessity. It was the only server platform I could deploy at the scale I needed. Given my time constraints, I probably could have done a better job configuring my servers, picking platforms, etc. So maybe my comments are colored a bit by my lack of expert-level PHP knowledge.

I definitely had fun doing the project. I was delighted to see that PHP 7 had evolved from a kind of ad hoc scripting language to a real developer’s tool that can be used to create large codebases that really do something. WordPress uses PHP, right?

But, today I regret that decision to go with PHP. I am sure I would have been scared off by Go’s newness in 2015, so I don’t know what else I should have picked at the time I needed it.

It is weird how Go reminds me of FORTRAN

Now, I’m not saying that PHP is truly crap for cloud applications…actually I’m lying. It is total crap for cloud apps, and if you’re thinking about using it you should dump it today.

Five years from now PHP will be degraded to COBOL status. It will be a language that will employ hordes of mediocre programmers who are managing mediocre, boring code. I’ve seen enough patterns in the IT industry to be confident advising system architects that picking Go over PHP will get you promoted.

Go Will Own Serverless

The term serverless is in an over-hyped state right now, so it means anything to whoever is selling the concept at any moment.

To me, serverless means code in the cloud that executes a single function that manages resources in my REST API.

While the definition of serverless is up for debate, one thing people can agree upon is that Go is now challenging Node as the most effective language to use in a serverless endpoint.

Go, aka Golang, is a new language from Google that is designed for today’s cloud apps, and it is where all the good cloud and server programmers are headed. Deploying serverless with Go can be done with just a handful of source code files. 

AWS Lambda is the most popular implementation of Serverless with Go

Here is the curious fact that made me pay attention to Go — Amazon adopted Go as a language for their AWS Lambda serverless platform before Google could release a similar capability in GCP. That must have stung the GCP team!

AWS is obviously responding to customer demand by being the first to support Go in new, lower-cost service offerings like Lambda. Azure can’t be far behind.

Why Will Go Win?

Go wins because it strips away all the crap that has been building up in languages ever since we left FORTRAN and COBOL behind in the late 1980’s.

Go revives some long-lost concepts in programming, like minimalist code, pointers and a direct approach to data streams. I can see the Go team’s point about how object-oriented programming (OOP) has wrapped things up in layers of complexity and abstraction that we may not need for every situation.


It all got started in the 1990’s when Java became the de facto enterprise standard for inter-process communication.

The sheer verbosity of Java is a problem, and the number of class files for a project can skyrocket. Java clearly has not kept up. PHP 7 is now noted for being more Java-like with its extensive support of classes and methods. I have got to wonder if that is a good thing!

Rob Pike, co-inventor of Go, likes to note in paisley smoking jackets that all languages are becoming more like each other. Go, on the other hand, was designed by computer science gods to go really fast on servers and do tons of stuff in the background that you really need.

With a spike in the end zone, Go Cloud is the latest indication of Go’s pending dominance. This project introduces a normalization layer where you can marshal resources from any public cloud with a single Go app.

So Long, PHP

For me, it’s bittersweet. While I was deploying my PHP server, I was also leveling-up on my Salesforce Apex skills. Apex is a close cousin of Java. I’ve had many exciting discoveries as a PHP programmer. I loved weaving my new Java knowledge with the OOP capabilities of PHP.

I now realize that OOP is not suitable for cloud apps. Java sold PHP down the OOP river.

It is time to forget the old advice of staying with what you know so you don’t waste time learning a new tool. If you are still using PHP for your server apps, Go is that good and worth the risk.

While OOP made poor, type-less PHP behave more predictably, OOP has hopelessly bloated PHP to the point where it is now a bad decision to keep using PHP for your server apps. The momentum and features of Go make it the winner for the next few years, at least. It is time to switch.

Soon, people will start thinking this is a true expression: COBOL == PHP.

Loud and Clear: iPad Pro 24-bit High Def Audio

Are you the late night coder who demands the highest fidelity in your headphones, especially when you need to rock out at ear-splitting decibels? Have you been looking for a 24-bit high def audio output solution for your iPad?

Audiophile Nirvana Is Here

Then, welcome to audiophile nirvana with an Audioengine D3 Portable DAC & Headphone Amp, only $99 at Amazon.

You will need your own USB C to A adapter, but all you need to do is plug it in and you’ve got 24-bit high-def audio for your high-end headphones. In my case, it’s the AudioTechnica ATH-M50x.

I’ve used the D3 on my Macbook Pro 15″ (2013) and an Intel NUC. The iPad seems to perform better than either of those in terms of maximum volume.


Make a Docker Lab With Linux, Mac and Windows

Here’s a quickie realization for folks like me who naively figured it would be easy to integrate Windows and Mac VS Code users with Docker. This realization resulted in me building a bare-metal Linux box that makes everything work a lot easier in our Docker lab.

Docker Is Easy, Usually

It’s easy for most engineers to use Docker when working on one platform. If you’re working exclusively in Windows, MacOS or Linux, then you’re probably not going to hit the speed bump I’m about to describe.

This advice will resonate for IT pros who need to integrate Docker into an enterprise with Mac and Windows developers.

Ignore The Windows Docker Strategy

Sometimes I get led astray in my devops studies because I am lured into a vendor strategy.

I see a shiny object that makes me feel better. I am like a fish chasing a lure because this newfangled vendor strategy promises me things will be glorious once I buy into the strategy. What really happens is that I get hooked on the vendor’s offering.

If you are working with Node, PHP, or Golang to build a cloud app, then you should know that the Windows Docker strategy is crap.


The containerization “strategy” announced by Docker and Microsoft in 2017 is a good example of vendors luring in IT pros with talk of nirvana. Here is that strategy in a nutshell: if you want to run Windows servers within a Docker container, that is now possible. You still need Hyper-V running underneath Docker on a Windows 2016 server, so whatever.

Hopefully this little realization will save someone else the time and bother I wasted going down a few rabbit holes.

Linux Rules Devops

As it is with all things devops, it is always best to go back to mother, i.e. Linux.

After studying the Docker documentation and tuning my network, I realized that I needed Docker to run on a dedicated Linux platform. If I told my Docker clients that DOCKER_HOST was the Linux server, I figured I might have a solution that worked! SPOILER ALERT — It does and it’s spectacular.

Here’s the real zinger that got me to set up a dedicated server. The documentation on setting up networks and exposing containers is for Linux. The Docker networking instructions give solutions using iptables in Linux.

Set Up Your Docker Lab Server

Take note that I am using an open, unauthenticated port on the Docker server for control communication, which Docker does not recommend. You can implement TLS security on your ports to tighten things up if needed.

I went with a bare metal Linux installation for a couple of reasons. First, Docker involves the use of virtualization technology, and it’s always best to avoid nesting virtualizations. Also, just about any spare PC will do for this lab setup. Even a five-year-old desktop with a 120 GB SSD will be an awesome Linux lab server. 

I only spent an hour sitting in the lab setting up my new server. I used the latest LTS version of Ubuntu, but several Linux distros may be used for your Docker Linux host. If you use another distro, then check for distribution-specific instructions for how to open Docker port 2375.

To set up a simple Mac and Windows Docker lab without security, follow these instructions.

  1. Start with a working VS Code installation on Mac and Windows.
  2. Install Docker locally on both Mac and Windows developer workstations.
  3. Integrate VS Code with Docker on Mac and Windows. Make sure you have the Docker extension installed and working properly.
  4. Prepare a bare-metal server from the distribution ISO with Ubuntu Server 18.04 LTS.
  5. Assign the server a fixed, private IP, such as
  6. Remove AppArmor on the Linux server to improve performance.
  7. Follow these instructions to install Docker-CE.
  8. To open up port 2375, update the following system files and reboot your server.
# File: /etc/default/docker
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock"

# File: /lib/systemd/system/docker.service
## Add an EnvironmentFile line + add "$DOCKER_OPTS" at end of ExecStart
## After the change, run "systemctl daemon-reload"
EnvironmentFile=/etc/default/docker
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS

Update Mac and Windows Environments

Start configuring your clients by adding the following line to your .zshrc and .bashrc files on the Mac (substitute your own server’s IP):

export DOCKER_HOST=tcp://

On Windows, go into the System control panel, Advanced Settings, Environment Variables and add the following:

DOCKER_HOST = tcp://
If you are using Windows Subsystem for Linux (WSL), and you use Docker with WSL, then add the export statement to your .zshrc and .bashrc files too.

Restart VS Code and any terminal or shell programs you have running. Launch a new shell and test it with docker info. You should see Ubuntu 18.04 OS listed in the output.


No luck? First, double-check the export statements and Environment Variable settings in your client environments. Make sure you have the “:2375” on the end.

If you have doubts whether you have successfully opened up port 2375 on your Linux server, check the port manually. First, make sure you have telnet installed on your Windows, Mac or WSL. Issue this telnet command to see if the port on the Docker host is open.

$ telnet 2375

If the port is open, then telnet will continue to run and you will need to quit it with CTRL-C or CTRL-]. If the port is not open, then you will get a connection refused error message.

Celebration Time!

After installing your new host, disable the Docker daemons running on Mac and Windows. The Docker CLI works without the local servers running.

Now it’s time to bask in the glory of your conquest. Run into the next office and claim victory!

Start a PHP 7.2 Slim Project on Ubuntu 18.04

I use Slim, a lightweight PHP framework for creating HTTP applications and APIs using “routes.”

Here’s my formula for deploying my Slim app on Ubuntu 18.04 with PHP 7.2. This has worked on GCP and AWS, as well as my own hosted cluster.

Please note that I will be working in a root terminal session, so I will omit the use of sudo from these instructions.

One Fresh LAMP Image, Please

Let us start with a fresh installation of Ubuntu 18.04.

# important!
apt update
apt -y upgrade

I like using tasksel to install LAMP (Apache, MySQL and PHP). tasksel is the menu you encounter when installing Ubuntu from an ISO. If I am installing on a cloud service, I don’t get the opportunity to use this menu, so I have to install it manually.

apt install -y tasksel
tasksel
# Scroll down to LAMP Server
# Hit Spacebar to select
# Tab to the OK button and hit Enter

After I install MySQL, I always secure it.

mysql_secure_installation
# Follow the prompts and accept all security recommendations

Use Certbot for Free SSL

Hooray for Certbot and Let’s Encrypt! Now it only takes a few minutes to configure Apache with SSL certificates.

Configure Public Domain Names

Super-important first step: assign a domain name you control to the public IP address of your hosted Ubuntu instance. The public clouds give you a public IP when you set up a new instance. Use that IP address to set up DNS A records for your host.

For example, if I have a domain called mydomain.com, and I want a host to be called api.mydomain.com and www.mydomain.com, and I want mydomain.com to work as well, and my public-facing IP address is (an example address), then I need these A records in my mydomain.com.db DNS zone file:

@    14400  IN  A
api  14400  IN  A
www  14400  IN  A

Use Certbot to Install Let’s Encrypt Certificates

Start by installing Certbot and accepting the license terms.

add-apt-repository ppa:certbot/certbot
# Hit Enter to accept the terms
apt install -y python-certbot-apache

Run the certbot command as shown, entering all of your domain names. Enter your email address for identification and sign up for the EFF.org newsletter! Pick the option to automatically redirect your HTTP traffic to HTTPS.

certbot --apache -d mydomain.com -d www.mydomain.com -d api.mydomain.com
# Enter your email address
# Pick the option to redirect HTTP to HTTPS

Install PHP Modules and Composer

Slim uses the popular Composer module management system for PHP. I need a few PHP modules to get Composer to work with my Slim projects.

apt install -y composer zip php-curl php-xml php-mbstring php-zip

Load Project Files

For day-to-day work on a PHP/Slim project, I use a regular, unprivileged user account. I set up a new account with the adduser command. The username vern here is just an example; select any username you want.

adduser vern
# Select a strong password
# Complete the "Full Name" field
# Hit Enter for the remaining prompts

Now, I need to impersonate the new user and load the project files from GitHub (or wherever I have my repository) into the project directory. After that I bring in all the dependent modules by running Composer.

In this example, I start a new Slim project called myproject using the Slim Skeleton repository.

cd ~vern
su vern
git clone https://github.com/slimphp/Slim-Skeleton.git myproject
cd myproject
composer install

The last step in developer account preparation is to give Apache ownership of the log directory. Change vern to your developer account name.

chown www-data:www-data /home/vern/myproject/logs

Configure Apache for Slim

Edit the Apache SSL configuration file that was generated by Certbot:

vi /etc/apache2/sites-enabled/000-default-le-ssl.conf

The contents should look like this.

<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
ServerName api.mydomain.com
SSLCertificateFile /etc/letsencrypt/live/api.mydomain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/api.mydomain.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>

Change the DocumentRoot directive to point to the project’s public directory.

DocumentRoot /home/vern/myproject/public

Add the following <Directory> directive before the </VirtualHost> tag.

<Directory "/home/vern/myproject/public">
   Options Indexes FollowSymLinks MultiViews
   AllowOverride all
   Require all granted
   <IfModule mod_rewrite.c>
     RewriteEngine on
     RewriteCond %{REQUEST_FILENAME} !-f
     RewriteRule ^(.*)$ index.php?_url=/$1 [QSA,L]
   </IfModule>
</Directory>

Finally, your 000-default-le-ssl.conf file should look like this:

<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerAdmin webmaster@localhost
DocumentRoot /home/vern/myproject/public
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
ServerName api.mydomain.com
SSLCertificateFile /etc/letsencrypt/live/api.mydomain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/api.mydomain.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
<Directory "/home/vern/myproject/public">
   Options Indexes FollowSymLinks MultiViews
   AllowOverride all
   Require all granted
   <IfModule mod_rewrite.c>
     RewriteEngine on
     RewriteCond %{REQUEST_FILENAME} !-f
     RewriteRule ^(.*)$ index.php?_url=/$1 [QSA,L]
   </IfModule>
</Directory>
</VirtualHost>
</IfModule>

Save and restart Apache.

apache2ctl restart

Bask In The Glory!

Fire up your browser and go to https://api.mydomain.com/ and you should see the Slim default page.

Salesforce Apex Beautified in VS Code with Uncrustify

Why Beautify?

As a Salesforce Apex coder I admit to being a little persnickety when it comes to my code. Who doesn’t want their code to look just the way you want it to? But, when working in teams personal coding habits can lead to conflict.


What to do? Your dev team can’t have internal battles over spaces or tabs!

Fortunately, long before Silicon Valley parodied the anal-retentive nature of coders, a technical solution had been figured out. It’s called a code beautifier, and it’s built right into today’s hottest IDE: Microsoft VS Code.

The idea behind using a code beautifier and a coding standard for your code appearance is to standardize your formatting for the benefit of your fellow coders. We all know how one gets used to how curly braces are used in a class or method definition. If your coding buddy doesn’t have the same philosophy, it creates angst and conflict when you have to re-wire your brain to read the mess!

JavaScript and TypeScript coders have the benefit of Prettier, a popular VS Code extension that will auto-format your JavaScript code.

To beautify Salesforce Apex in VS Code one needs to recite some magical incantations with a new extension called Uncrustify.

While no one has made an Apex-specific beautifier yet, we can use the VS Code extension uncrustify and its ability to format Java, a close cousin of Apex. The trick is to tell uncrustify to treat Apex files like Java.

Steps To Auto-Format Apex in VS Code with Uncrustify

  1. Visit and star the vscode-uncrustify Github repository to show your appreciation!
  2. Linux users download and install the repo. Mac users install with brew install uncrustify or see http://macappstore.org/uncrustify. Windows users download the binary from Sourceforge and install it in your PATH.
  3. Install the Uncrustify VS Code Extension and reload.
  4. Set up a default configuration file in your current workspace with the uncrustify.create command.
  5. Tell uncrustify to treat Apex like Java with this setting:
  5. Tell uncrustify to treat Apex like Java with this setting:
{
    "uncrustify.langOverrides": {
        "apex": "JAVA"
    }
}

That is it! Now the VS Code format command should format your document. Select part of your file, right-click, and you’ll have a Format Selection command available. Be sure to check the README and learn about all the options.

Now your code will be beautiful and your team can resume their fight about your tech stack!