This article is translated from the JetBrains blog post "JavaScript Best Practices", originally authored by David Watson.
JavaScript is undoubtedly the most widely used programming language in the world, and it has a tremendous impact on one of the most important technologies we rely on: the internet. With that reach comes great responsibility. The JavaScript ecosystem evolves rapidly, making it genuinely difficult to keep up with the latest best practices.
In this blog post, we will introduce several key best practices in modern JavaScript for writing cleaner, more maintainable, and higher-performing code.
Project-defined practices outweigh all other practices
The project you are coding for may have its own strict rules. Project rules matter more than any advice in any best-practices article, including this one! If you want to adopt a specific practice, make sure it aligns with the project rules and codebase, and that everyone on the team is on board.
Use the latest JavaScript
JavaScript was invented on December 4, 1995. Since then, it has been continuously evolving. Online, you can find a lot of outdated advice and practices. Be careful and verify whether the practices you want to use are up to date.
Additionally, be cautious when using the latest JavaScript features. It’s best to start using new JavaScript features that have at least reached Ecma TC39 Stage 3.
That said, here are some currently common JavaScript best practices:
Declare variables
You may encounter code that uses many var declarations. This may be intentional, but if it's old code, it could simply be because var was once the only option.
Recommendation: Use let and const instead of var to declare variables.
Why this is important: Although var is still available, let and const provide block scope, which is more predictable and reduces the accidental errors that can occur with function-scoped var declarations.
for (let j = 1; j < 5; j++) {
  console.log(j);
}
console.log(j);
// Uncaught ReferenceError: j is not defined

// If we did this using var:
for (var j = 1; j < 5; j++) {
  console.log(j);
}
// logs the numbers 1 to 4
console.log(j);
// logs 5, because j still exists outside the loop
Classes instead of functions and prototypes
In many old codebases, and in older articles about OOP in JavaScript, you may encounter constructor functions with prototype methods used to simulate classes. For example:
function Person(name) {
  this.name = name;
}

Person.prototype.getName = function () {
  return this.name;
};

const p = new Person('A');
console.log(p.getName()); // 'A'
Recommendation: This approach uses constructor functions to control the prototype chain by hand. In this case, however, using classes is almost always better.
class Person {
  constructor(name) {
    this.name = name;
  }

  getName() {
    return this.name;
  }
}

const p = new Person('A');
console.log(p.getName()); // 'A'
Why this is important: The main reason to use classes is that they have clearer syntax.
Private class fields
In older JavaScript code, it was common to use an underscore (_) as a convention to indicate private properties or methods in a class. However, this does not actually enforce privacy—it merely signals to developers that something should be private.
class Person {
  constructor(name) {
    this._name = name; // Conventionally treated as private, but not truly private
  }

  getName() {
    return this._name;
  }
}

const p = new Person('A');
console.log(p.getName()); // 'A'
console.log(p._name); // 'A' (still accessible from outside)
Recommendation: When you truly need private fields in a class, JavaScript now has the # syntax for real private fields. This is an official language feature that enforces true privacy.
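For example, the Person class above can be rewritten with a real private field; trying to read it from outside the class is a syntax error:

```javascript
class Person {
  // A real private field, enforced by the language
  #name;

  constructor(name) {
    this.#name = name;
  }

  getName() {
    return this.#name;
  }
}

const p = new Person('A');
console.log(p.getName()); // 'A'
// console.log(p.#name); // SyntaxError: reading a private field outside the class
```

Unlike the underscore convention, the private field does not even appear as a property of the object, so it cannot leak through enumeration.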
Arrow function expressions
Arrow functions are often used to make callback functions or anonymous functions more concise and readable. They are particularly useful with higher-order functions like map, filter, or reduce.
const numbers = [1, 2];

// Using an arrow function
numbers.map(num => num * 2);

// Instead of
numbers.map(function (num) {
  return num * 2;
});
Recommendation: Arrow functions provide a more concise syntax, especially when the function body is a single expression. They also have no this binding of their own: this is resolved from the surrounding lexical scope, which is particularly useful in class methods, where this can easily be lost.
Why this is important: Arrow functions improve readability by removing boilerplate, making callback functions and inline expressions more concise. They are also particularly valuable in classes and event handlers, because this refers to the surrounding lexical scope instead of being rebound by the caller. This avoids common this-related mistakes in traditional function expressions, especially in asynchronous or callback-heavy code.
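A small sketch of the difference, using a hypothetical PriceScaler class: the arrow callback sees the instance's this, while an ordinary function expression in the same position would not.

```javascript
class PriceScaler {
  constructor(factor) {
    this.factor = factor;
  }

  applyAll(prices) {
    // The arrow function inherits `this` from applyAll, so `this.factor` works
    return prices.map(price => price * this.factor);
  }

  // With a traditional function expression, `this` inside the callback
  // would be undefined (class bodies run in strict mode), and
  // `this.factor` would throw a TypeError:
  //
  //   return prices.map(function (price) {
  //     return price * this.factor;
  //   });
}

const scaler = new PriceScaler(2);
console.log(scaler.applyAll([10, 20])); // [20, 40]
```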
Nullish coalescing (??)
In JavaScript, developers often use the logical OR operator (||) to assign default values when a variable is undefined or null. However, this can lead to unexpected behavior when the variable holds values like 0, false, or an empty string (""), because || treats them all as falsy and replaces them with the default value.
For example:
const value = 0;
const result = value || 10;
console.log(result); // 10 (unexpected if 0 is a valid value)
Recommendation: When resolving default values, use the nullish coalescing operator (??) instead of ||. It only checks for undefined and null, so other falsy values (like 0, false, and "") remain unchanged.
const value = 0;
const result = value ?? 10;
console.log(result); // 0 (expected behavior)
Why this is important: The ?? operator provides a more precise way to handle default values when only null or undefined should trigger a fallback. It prevents the errors that can arise from ||, which may inadvertently override valid falsy values. Using nullish coalescing leads to more predictable behavior, enhancing code clarity and reliability.
Optional chaining (?.)
When dealing with deeply nested objects or arrays, you often have to check whether each property or array element exists before trying to access the next level. Without optional chaining, this requires verbose and repetitive code.
For example:
const product = {};

// Without optional chaining
const tax = product.price ? product.price.tax : undefined;
Recommendation: The optional chaining operator (?.) simplifies this by automatically checking whether a property or method exists before trying to access it. If any part of the chain is null or undefined, the whole expression evaluates to undefined instead of throwing an error.
const product = {};
// Using optional chaining
const tax = product?.price?.tax;
Why this is important: Optional chaining reduces the amount of boilerplate code and makes handling deeply nested structures easier. It ensures your code is cleaner and less error-prone by gracefully handling null or undefined values without requiring multiple checks. This enhances readability and maintainability, especially when dealing with dynamic data or complex objects.
async/await
In older JavaScript, handling asynchronous operations often relied on callbacks or promise chaining, which quickly led to complex and hard-to-read code. For example, using .then() chains can make the flow harder to follow, especially with multiple asynchronous operations:
function fetchData() {
  return fetch('https://api.example.com/data')
    .then(response => response.json())
    .then(data => {
      console.log(data);
    })
    .catch(error => {
      console.error(error);
    });
}
Recommendation: Use async and await to make asynchronous code read more like regular synchronous code. This improves readability and makes error handling with try...catch easier.
async function fetchData() {
  try {
    const response = await fetch('https://api.example.com/data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error(error);
  }
}
Why this is important: The async/await syntax eliminates the need for chaining .then() and .catch(), simplifying asynchronous operations. It makes your code more readable, maintainable, and understandable, especially when dealing with multiple asynchronous calls. Using try...catch for error handling is also more straightforward, resulting in clearer and more predictable logic.
Interacting with object keys and values
In older JavaScript code, interacting with object keys and values often involved manually looping with for...in or Object.keys(), then accessing values via bracket or dot notation. This can lead to verbose and less intuitive code.
const obj = { a: 1, b: 2, c: 3 };

// Older approach with Object.keys()
Object.keys(obj).forEach(key => {
  console.log(key, obj[key]);
});
Recommendation: Use modern methods like Object.entries(), Object.values(), and Object.keys() to handle object keys and values. These methods simplify the process and return useful structures (like arrays), making your code cleaner and easier to work with.
const obj = { a: 1, b: 2, c: 3 };

// Using Object.entries() to iterate over key-value pairs
Object.entries(obj).forEach(([key, value]) => {
  console.log(key, value);
});

// Using Object.values() to work directly with values
Object.values(obj).forEach(value => {
  console.log(value);
});
Why this is important: Modern object methods like Object.entries(), Object.values(), and Object.keys() result in clearer and more readable code. They reduce the boilerplate needed to iterate over objects and enhance code clarity, especially when dealing with complex or dynamic data structures. They also make it easier to convert objects to other forms (like arrays), making data manipulation more flexible and efficient.
Checking if a variable is an array
In the past, developers used various indirect methods to check whether a variable is an array, such as inspecting its constructor or using instanceof. These checks were often unreliable, especially across different execution contexts (like iframes).
const arr = [1, 2, 3];
// Older approach
console.log(arr instanceof Array); // true, but not always reliable across different contexts
Recommendation: Use the modern Array.isArray() method, which provides a simple and reliable way to check whether a variable is an array. It works consistently across different environments and execution contexts.
const arr = [1, 2, 3];
console.log(Array.isArray(arr)); // true
Why this is important: Array.isArray() is a clear, readable, and reliable way to check for arrays. It eliminates the need for verbose or error-prone checks like instanceof, ensuring your code detects arrays correctly even in complex or cross-environment scenarios. This reduces errors and makes behavior more predictable when working with different types of data structures.
Map
In early JavaScript, developers often used plain objects to map keys to values. However, this approach has limitations, especially when keys are not strings or symbols. Plain objects can only use strings or symbols as keys, making it cumbersome and error-prone if you need to map non-primitive objects (like arrays or other objects) to values.
const obj = {};
const key = { id: 1 };

// Trying to use a non-primitive object as a key
obj[key] = 'value';

console.log(obj); // { '[object Object]': 'value' } (the key was coerced to a string)
Recommendation: Use Map when you need to key by non-primitive objects or require a more powerful data structure. Unlike plain objects, Map allows any type of value (both primitive and non-primitive) as a key.
const map = new Map();
const key = { id: 1 };
// Using a non-primitive object as a key in a Map
map.set(key, 'value');
console.log(map.get(key)); // 'value'
Why this is important: Map provides a more flexible and predictable way to associate values with any type of key, whether primitive or non-primitive. It preserves both the type and the insertion order of keys, while plain objects convert keys to strings. This makes handling key-value pairs more powerful and efficient, especially when dealing with complex data or when you need fast lookups in larger collections.
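For instance, a Map keeps a number key and a string key distinct, while a plain object would coerce both to the same string key:

```javascript
const map = new Map();
map.set(1, 'number key');
map.set('1', 'string key');

// Both entries survive because Map does not stringify its keys
console.log(map.size);     // 2
console.log(map.get(1));   // 'number key'
console.log(map.get('1')); // 'string key'
```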
Hiding values with symbols
In JavaScript, objects are often used to store key-value pairs. However, when you need to add "hidden" or unique values to an object without risking name collisions with other properties, or if you want to keep them private from external code, using Symbol can be very useful. Symbols create unique keys that cannot be accessed through normal enumeration or accidental property lookup.
const obj = { name: 'Alice' };
const hiddenKey = Symbol('hidden');
obj[hiddenKey] = 'Secret Value';
console.log(obj.name); // 'Alice'
console.log(obj[hiddenKey]); // 'Secret Value'
Recommendation: Use Symbol when you want to add hidden properties to an object. Symbol keys are skipped by typical object operations (like for...in loops and Object.keys()), making them ideal for internal or private data that should not be exposed accidentally.
const obj = { name: 'Alice' };
const hiddenKey = Symbol('hidden');
obj[hiddenKey] = 'Secret Value';
console.log(Object.keys(obj)); // ['name'] (Symbol keys won't appear)
console.log(Object.getOwnPropertySymbols(obj)); // [Symbol(hidden)] (accessible only if specifically retrieved)
Why this is important: Symbols allow you to safely add unique and "hidden" properties to objects without worrying about key collisions or exposing internal details to other parts of the codebase. They are particularly useful in libraries or frameworks where you may need to store metadata or internal state without affecting or interfering with other properties. This ensures better encapsulation and reduces the risk of accidental overrides or misuse.
Check the Intl API before using additional formatting libraries
In the past, developers often relied on third-party libraries to handle tasks like formatting dates, numbers, and currencies for different locales. While these libraries provide powerful functionality, they can add extra overhead to your project and may duplicate functionality already built into JavaScript.
// Using a library for currency formatting
const amount = 123456.78;
// formatLibrary.formatCurrency(amount, 'USD');
Recommendation: Before reaching for external libraries, consider the built-in ECMAScript Internationalization API (Intl). It provides powerful built-in functionality for formatting dates, numbers, currencies, and more, based on locale. This often covers most internationalization and localization needs without the overhead of third-party libraries.
const amount = 123456.78;
// Using Intl.NumberFormat for currency formatting
const formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' });
console.log(formatter.format(amount)); // $123,456.78
Why this is important: The Intl API provides native, highly optimized support for internationalization, allowing you to avoid importing large libraries for simple formatting needs. By using built-in functionality, you keep your project lightweight, reduce dependencies, and still get comprehensive locale-based formatting. This improves performance and reduces the maintenance burden associated with third-party libraries.
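The same API covers dates as well. As a small sketch (the exact output can vary slightly between the ICU versions shipped with different runtimes):

```javascript
const date = new Date(Date.UTC(2024, 0, 15)); // January 15, 2024 (UTC)

// Using Intl.DateTimeFormat for locale-aware date formatting
const dateFormatter = new Intl.DateTimeFormat('en-US', {
  dateStyle: 'long',
  timeZone: 'UTC',
});

console.log(dateFormatter.format(date)); // January 15, 2024
```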
Common practices
Now, let's look at some common practices that should be considered best practices.
Use strict equality (===) if possible
One of the trickiest and most surprising behaviors in JavaScript comes from the loose equality operator (==). It performs type coercion, meaning it tries to convert the operands to the same type before comparing them. This can lead to strange and unexpected results, as famously illustrated in Brian Leroux's talk on "WTFJS":
console.log([] == ![]); // returns true (what a surprise?!)
In this case, the loose equality operator (==) converts both sides in unexpected ways, leading to non-intuitive results.
Recommendation: Use strict equality (===) instead of loose equality (==) whenever possible. Strict equality does not perform type coercion: it directly compares both value and type, resulting in more predictable and reliable behavior.
console.log([] === ![]); // returns false (as expected)
Here’s a more typical example to highlight the difference:
// Loose equality (==) performs type coercion
console.log(0 == ''); // true
// Strict equality (===) compares both value and type
console.log(0 === ''); // false (as expected)
Why this is important: Using strict equality (===) helps avoid unexpected behavior from type coercion in JavaScript. It makes comparisons more predictable and reduces the risk of subtle bugs, especially when dealing with different data types like numbers, strings, or booleans. Defaulting to === is good practice unless you have a specific reason to use loose equality and understand its implications.
Explicitly handle expressions in if statements
In JavaScript, if statements implicitly convert the result of their evaluated expressions to "truthy" or "falsy" values. This means that values like 0, "" (the empty string), null, undefined, and false are treated as falsy, while most other values (even [] and {}) are truthy. This implicit conversion can lead to unexpected results if you're not careful.
For example:
const value = 0;

if (value) {
  console.log('This will not run because 0 is falsy.');
}
Recommendation: It’s a good practice to explicitly state the condition in an if statement, especially when the values you are using may behave unexpectedly in truthy/falsy evaluations. This makes the code more predictable and easier to understand.
For example, avoid relying on implicit type coercion:
const value = 0;

// Implicit check (may behave unexpectedly for some values)
if (value) {
  console.log('This won’t run');
}
You can use an explicit condition:
// Explicitly check for the type or value you expect
if (value !== 0) {
  console.log('This will run only if value is not 0.');
}
Or, when checking for null or undefined:

const name = null;

if (name != null) { // Explicitly checking for both null and undefined
  console.log('Name is defined');
} else {
  console.log('Name is null or undefined');
}
Why this is important: By explicitly defining the conditions in your if statements, you reduce the likelihood of unexpected behavior caused by JavaScript's automatic type coercion. This makes your code clearer and helps prevent errors when working with potentially ambiguous values (like 0, false, null, or ""). Clearly stating the conditions you want to check is good practice, especially in complex logic.
Avoid using the built-in Number for sensitive calculations
JavaScript's built-in Number type is based on the IEEE 754 standard for floating-point numbers. While this works well for most purposes, it can lead to surprising inaccuracies, especially in decimal arithmetic. This is not a problem unique to JavaScript, but it can cause serious issues when dealing with sensitive data (like financial calculations).
For example, you might encounter this famous floating-point issue:
console.log(0.1 + 0.2); // 0.30000000000000004
Recommendation: When precision is critical (such as in financial calculations), avoid using the standard Number type for arithmetic. Instead, use specialized libraries like decimal.js or big.js, which are designed to perform precise decimal calculations without floating-point errors.
Here's how it works with decimal.js:
const Decimal = require('decimal.js');
const result = new Decimal(0.1).plus(0.2);
console.log(result.toString()); // '0.3'
These libraries keep calculations exact and prevent rounding errors from creeping into the results, making them ideal for sensitive tasks like handling money.
Why this is important: Inaccurate calculations can lead to serious problems when dealing with financial data, where even small discrepancies have significant impacts. JavaScript's floating-point math can yield unexpected results, and while the language is continually improving, it's best to rely on libraries like decimal.js or big.js to ensure precision. By using them, you avoid common pitfalls and ensure your calculations are accurate, reliable, and suitable for critical applications.
Be cautious with JSON and large integers
JavaScript has limitations when handling very large numbers. The maximum safe integer is 9007199254740991 (Number.MAX_SAFE_INTEGER). Numbers larger than this may lose precision and produce incorrect results. This becomes an issue when interacting with APIs or systems outside of JavaScript, where large numbers (like database ID fields) can easily exceed JavaScript's safe range.
For example, when parsing JSON containing large numbers:
console.log(
  JSON.parse('{"id": 9007199254740999}')
);
// Output: { id: 9007199254741000 } (precision loss)
Recommendation: When dealing with large numbers in JSON data, use the reviver parameter of JSON.parse() to intercept specific values (like id fields) and store them in a safe format (such as strings). Note that the third reviver argument used below comes from the TC39 "JSON.parse source text access" proposal and requires a recent runtime.
console.log(
  JSON.parse(
    '{"id": 9007199254740999}',
    (key, value, ctx) => {
      if (key === 'id') {
        return ctx.source; // Preserve the original digits as a string
      }
      return value;
    }
  )
);
// Output: { id: '9007199254740999' }
Using BigInt: JavaScript introduced BigInt to safely handle numbers larger than Number.MAX_SAFE_INTEGER. However, BigInt cannot be directly serialized to JSON. If you try to stringify an object containing a BigInt, you will get a TypeError:
const data = { id: 9007199254740999n };

try {
  JSON.stringify(data);
} catch (e) {
  console.log(e.message); // 'Do not know how to serialize a BigInt' (wording varies by engine)
}
To address this, use the replacer parameter of JSON.stringify() to convert BigInt values to strings:
const data = { id: 9007199254740999n };

console.log(
  JSON.stringify(data, (key, value) => {
    if (typeof value === 'bigint') {
      return value.toString() + 'n'; // Append 'n' to mark the value as a BigInt
    }
    return value;
  })
);
// Output: {"id":"9007199254740999n"}
⚠️ Important Note: When using these techniques to handle large integers in JSON, ensure that both your application’s client and server agree on how to serialize and deserialize the data. For example, if the server sends the id in a specific string format or as a BigInt, the client must be prepared to handle that format during deserialization.
Why this is important: JavaScript's numeric precision limits can lead to serious errors when receiving large numbers from external systems. By using BigInt and the reviver/replacer parameters of JSON.parse() and JSON.stringify(), you can ensure that large integers are handled correctly, preventing data corruption. This is especially crucial when precision is vital, such as with large IDs or financial transactions.
Use JSDoc to help code readers and editors
When using JavaScript, function and object signatures often lack documentation, making it harder for other developers (or even your future self) to understand what parameters and objects contain or how to use the functions. Without proper documentation, code can become ambiguous, especially when the structure of objects is unclear:
For example:
const printFullUserName = user =>
// Does user have the `middleName` or `surName`?
`${user.firstName} ${user.lastName}`;
In this case, there is no documentation to clarify what properties the user object should have. Does user contain middleName? Should it be surName instead of lastName?
Recommendation: By using JSDoc, you can define the expected structure of objects, function parameters, and return types. This makes it easier for code readers to understand functionality and helps code editors provide better autocompletion, type checking, and tooltips.
Here’s how to improve the previous example using JSDoc:
/**
* @typedef {Object} User
* @property {string} firstName
* @property {string} [middleName] // Optional property
* @property {string} lastName
*/
/**
* Prints the full name of a user.
* @param {User} user - The user object containing name details.
* @return {string} - The full name of the user.
*/
const printFullUserName = user =>
`${user.firstName} ${user.middleName ? user.middleName + ' ' : ''}${user.lastName}`;
Why this is important: JSDoc improves code readability and maintainability by clearly indicating the types of values that functions and objects expect. It also enhances the developer experience by enabling autocompletion and type checking in many editors and IDEs (like Visual Studio Code or WebStorm). This reduces the likelihood of errors and makes it easier for new developers to get up to speed with the code.
With JSDoc, editors can provide hints, autocompletion for object properties, and even warnings when developers misuse functions or provide incorrect parameter types, making your code more understandable and robust.
Use tests
As codebases grow, manually verifying that new changes do not break important functionality becomes very time-consuming and error-prone. Automated tests help ensure that your code works as expected and allow you to make changes with confidence.
In the JavaScript ecosystem, there are many testing frameworks available, but starting with Node.js version 20, you no longer need an external framework to begin writing and running tests. Node.js now includes a built-in stable test runner.
Here’s a simple example using Node.js's built-in test runner:
import { test } from 'node:test';
import { equal } from 'node:assert';

// A simple function to test
const sum = (a, b) => a + b;

// Writing a test for the sum function
test('sum', () => {
  equal(sum(1, 1), 2); // Passes if sum(1, 1) equals 2
});
You can run this test with the following command:
node --test
This built-in solution simplifies the process of writing and running tests in a Node.js environment. You no longer need to configure or install other tools like Jest or Mocha, although these options are still very useful for larger projects.
E2E testing in the browser: For end-to-end (E2E) testing in the browser, Playwright is an excellent tool that allows you to easily automate and test interactions within the browser. With Playwright, you can test user flows, simulate interactions across multiple browsers (like Chrome, Firefox, and Safari), and ensure your application behaves as expected from the user's perspective.
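As a minimal sketch of what a Playwright test looks like (this assumes @playwright/test is installed and uses example.com as a stand-in URL; run it with npx playwright test):

```javascript
// tests/home.spec.js
import { test, expect } from '@playwright/test';

test('home page shows its heading', async ({ page }) => {
  await page.goto('https://example.com/');

  // Assert on what the user actually sees, not implementation details
  await expect(page).toHaveTitle(/Example Domain/);
  await expect(page.locator('h1')).toHaveText('Example Domain');
});
```

Playwright runs the same spec against each configured browser project, so one test file can cover Chromium, Firefox, and WebKit.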
Other environments: Two alternative JavaScript runtimes, Bun and Deno, also provide built-in test runners similar to Node.js, making it easy to write and run tests without additional setup.
Why this is important: Writing tests can save time in the long run, as it helps catch errors early and reduces the need for manual testing after every change. It also gives you confidence that new features or refactoring won’t introduce regressions. In fact, modern runtimes like Node.js, Bun, and Deno include built-in test runners, meaning you can start writing tests immediately with minimal setup. Testing tools like Playwright help ensure your application runs seamlessly in real browser environments, adding extra assurance for critical user interactions.
Final thoughts
While it may seem like there’s a lot to learn, we hope this gives you insight into areas you may not have considered and wish to implement in your JavaScript projects. Again, feel free to bookmark this content and refer back to it whenever needed. JavaScript conventions are constantly changing and evolving, as are frameworks. Keeping up with the latest tools and best practices will continuously improve and optimize your code, but it can be challenging to do so. We recommend keeping an eye on the trends in ECMAScript versions, as this often points to new conventions that are widely adopted in the latest JavaScript code. TC39 typically proposes recommendations for the latest ECMAScript versions, and you can follow these proposals.
By adopting these modern JavaScript best practices, you can not only write usable code but also create clearer, more efficient, and maintainable solutions. Whether it’s using newer syntax like async/await
, avoiding pitfalls with floating-point numbers, or leveraging the powerful Intl
API, these practices will help you stay up to date and confident in your codebase. As the JavaScript ecosystem continues to evolve, taking the time now to adopt best practices will spare you future headaches and prepare you for long-term success.
That’s all for today! We hope this article has been helpful—feel free to comment with questions, discussions, and suggestions. Happy coding!