Every React developer will, at some point, need to make sure their applications are safe from security vulnerabilities. Ideally, security should be thought about from the start of a project. However, it often becomes important when developers prepare an application for its initial public release or submit it for penetration testing.
The first step many take is to look at the OWASP Top 10 list, but this can lead to confusion. Many of the items on the list are about the backend, and it seems like React already handles most of the frontend concerns. Does that mean we’re done? Not at all! I’m going to share a list of things you should check to make sure your application is secure. While I’m focusing on TypeScript, most of what I’ll mention applies to JavaScript apps as well.
XSS Prevention
Even though React has strong tools for guarding against XSS vulnerabilities, you can still accidentally write code that’s not safe from attacks.
dangerouslySetInnerHTML
This method is the most direct route to introducing an XSS vulnerability. Despite the name clearly signaling its security implications, we all know the lengths developers will go to when deadlines loom and a persuasive manager applies pressure.
import React from 'react';
interface MessageFooterProps {
userFooterHTML: string;
}
const MessageFooter: React.FC<MessageFooterProps> = ({ userFooterHTML }) => {
// This is an unsafe practice and should not be used in production code
return <div dangerouslySetInnerHTML={{ __html: userFooterHTML }} />;
};
export default MessageFooter;
The best approach might be to reconsider whether allowing users to insert custom HTML in the footer is truly necessary. However, if circumstances necessitate such a feature, using a library that utilizes a whitelist of HTML tags and attributes—stripping away all else—can be a good solution. DOMPurify is one such library that serves this purpose. Let’s explore an example:
import React from 'react';
import DOMPurify from 'dompurify';
interface MessageFooterProps {
userFooterHTML: string;
}
const MessageFooter: React.FC<MessageFooterProps> = ({ userFooterHTML }) => {
const sanitizedHTML = DOMPurify.sanitize(userFooterHTML, {ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a']}); // Adjust allowed tags as necessary
return <div dangerouslySetInnerHTML={{ __html: sanitizedHTML }} />;
}
export default MessageFooter;
Embedding User Input in JavaScript
You can embed JavaScript directly within React components, but it’s better to avoid it. If it’s not properly sanitized and includes any user-provided data, it can lead to XSS vulnerabilities.
// Example of an insecure practice
<script>{`let userData = ${userInput};`}</script>
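If server-provided data genuinely has to be embedded in an inline script, a safer pattern is to serialize it and escape the characters that could break out of the script context. The helper below is a minimal sketch; the function name and escape list are illustrative, not taken from any specific library:

```typescript
// A minimal sketch: serialize user data with JSON.stringify, then escape
// characters that could close the <script> tag or change the HTML context.
function safeJsonForScript(value: unknown): string {
  return JSON.stringify(value)
    .replace(/</g, "\\u003c")
    .replace(/>/g, "\\u003e")
    .replace(/\u2028/g, "\\u2028")   // line separator, invalid in some JS contexts
    .replace(/\u2029/g, "\\u2029");  // paragraph separator
}

// "</script>" can no longer terminate the surrounding script block:
console.log(safeJsonForScript("</script><script>alert(1)</script>"));
```

The escaped string stays valid JSON, so it can still be parsed on the client with `JSON.parse`.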
Inline Event Handlers
Injecting user input into event handlers can also be risky if not properly handled.
// Example of an insecure practice
<div onClick={() => executeUserProvidedCode(userInput)}>Click me</div>
Using the eval function
The eval() function in JavaScript is another avenue for XSS as it runs strings as code.
// Example of an insecure practice
eval(userInput);
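In most cases where eval() is used to turn a string into data, JSON.parse is the safe replacement: it accepts only data, never executable statements. A minimal sketch (the input shape is illustrative):

```typescript
// Hypothetical example: user-supplied settings arriving as a JSON string.
const userInput = '{"theme": "dark", "fontSize": 14}';

// eval(`(${userInput})`) would execute arbitrary code embedded in the string;
// JSON.parse only accepts data and throws on anything that is not valid JSON.
const settings = JSON.parse(userInput) as { theme: string; fontSize: number };

console.log(settings.theme); // "dark"
```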
Third-party libraries and components
Sometimes third-party libraries or components might have vulnerabilities, and if user input is passed to them without proper sanitization, it could lead to security risks.
Every time you use an external component to display user-provided data, test it and add the scenario to your regression test cases. This ensures that the field is not vulnerable to XSS attacks, at least with the simplest test string like
<script>alert(123)</script>
Not Sanitizing User Input
Any time user input is used within the application and is not properly sanitized or validated, it could potentially lead to an XSS vulnerability, especially when dealing with HTML, JavaScript, or URLs.
URL Parameters and Routing
If user-provided URL parameters are injected into the DOM without sanitization, this can also lead to XSS vulnerabilities.
// Example of an insecure practice
<a href={userProvidedURL}>Link</a>
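One way to mitigate this, sketched below, is to parse the user-provided value with the URL constructor and allow only expected protocols; anything else (javascript:, data:, etc.) falls back to a harmless value. The protocol allowlist, base URL, and fallback are assumptions to adapt to your needs:

```typescript
// A sketch of validating a user-provided URL before rendering it in an href.
function sanitizeUrl(url: string): string {
  try {
    // Relative URLs resolve against an assumed base origin.
    const parsed = new URL(url, "https://example.com");
    if (parsed.protocol === "https:" || parsed.protocol === "http:") {
      return parsed.href;
    }
  } catch {
    // Fall through for unparseable input.
  }
  return "about:blank"; // Safe fallback for javascript:, data:, etc.
}

console.log(sanitizeUrl("javascript:alert(1)")); // "about:blank"
console.log(sanitizeUrl("https://example.com/a")); // "https://example.com/a"
```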
HTTP Headers
To ensure clarity on which request’s header we are referring to, let’s examine the typical request-response lifecycle of a React application.
When a user enters the application’s URL, the browser sends an initial request to the server housing our code. This server responds with an HTML file—let’s name this the ‘initial response’—which contains links to JavaScript files that boot up your application. Once the app is alive, it usually makes multiple API calls to various endpoints; these could be housed on the same server or a different one, and we’ll refer to these as ‘API requests’ and ‘API responses’.
CORS configuration
CORS (Cross-Origin Resource Sharing) configuration informs the web browser when it should restrict web applications from making requests to different domains, serving as a vital security measure. It shields our API from unauthorized web applications, which is critical because a user could be logged into a secure application in one browser tab and unknowingly open a malicious one in another. Without proper browser protection, the malicious application could exploit the user’s active session with the secure application to make unauthorized API requests, retrieve confidential data, or enact unauthorized changes.

By default, web browsers implement the same-origin policy, allowing scripts to run only on pages originating from the same site — a combination of scheme, hostname, and port number. There are, however, valid reasons for applications to make cross-origin requests. For instance, an application at https://example.com might need to access https://api.example.com, or there could be a need to use multiple APIs of distributed systems. To dictate who can connect to an API and how, CORS headers must be employed.
CORS configuration is included in API response headers:
Access-Control-Allow-Origin
Specifies which origin(s) are allowed to access the resource. It can be a specific single origin or a wildcard (*), which allows any origin. If the server accepts multiple origins, it must return the one matching the source of the request.
Access-Control-Allow-Origin: *
Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Methods
Lists the HTTP methods (GET, POST, PUT, DELETE, etc.) that are allowed when accessing the resource, or a wildcard (*), which allows any method.
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Methods: *
Access-Control-Allow-Headers
Used in response to a preflight request (preflight requests are explained a bit further on) to indicate which HTTP headers can be used during the actual request, or a wildcard (*) that allows any headers.
Access-Control-Allow-Headers: X-Custom-Header, Authorization, Content-Type
Access-Control-Allow-Headers: *
Access-Control-Allow-Credentials
Indicates whether the browser should include credentials with requests.
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers
Indicates which headers can be exposed to the client as part of the response, either by listing their names or with a wildcard (*) that exposes any header.
Access-Control-Expose-Headers: Content-Encoding, Kuma-Revision
Access-Control-Expose-Headers: *
Access-Control-Max-Age
Indicates how long (in seconds) the results of a preflight request can be cached.
Access-Control-Max-Age: 600
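On the server side these headers are usually set by framework middleware, but the underlying logic is simple. The sketch below is framework-agnostic and the origin allowlist is a placeholder; it shows the key rule: echo back only a matched origin, never reflect arbitrary origins or default to a wildcard when credentials are involved.

```typescript
// Hypothetical allowlist-based CORS header builder; the origins listed
// here are placeholders.
const ALLOWED_ORIGINS = ["https://example.com", "https://admin.example.com"];

function corsHeaders(requestOrigin: string): Record<string, string> {
  if (!ALLOWED_ORIGINS.includes(requestOrigin)) {
    return {}; // No CORS headers: the browser will block the response.
  }
  return {
    // Echo back only the matched origin, never a blanket wildcard.
    "Access-Control-Allow-Origin": requestOrigin,
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    "Access-Control-Max-Age": "600",
  };
}

console.log(corsHeaders("https://evil.example.net")); // {}
```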
GET request example
Let’s examine an example of a GET request made from a web application to an API located in a subdomain:
Request headers
GET /users/ HTTP/1.1
Host: api.example.com
Origin: https://example.com
...
Response headers
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://example.com
...
This informs the browser that the response can be shared with the application located at the origin https://example.com.
POST, PUT, DELETE requests
For GET requests, things are relatively straightforward. However, when it comes to HTTP methods that modify the state of the application, such as POST, it’s a different story. Browsers perform a preliminary check to ensure they’re permitted to make a POST request by issuing a preflight OPTIONS request. Only after receiving the appropriate Access-Control-Allow-Origin headers in response to this preflight request will the browser proceed with the actual POST request.
OPTIONS request headers
OPTIONS /users HTTP/1.1
Host: api.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type
...
OPTIONS response headers
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Content-Type
...
POST request headers
POST /users/ HTTP/1.1
Host: api.example.com
Origin: https://example.com
Content-Type: application/json
...
POST response headers
HTTP/1.1 201 Created
Access-Control-Allow-Origin: https://example.com
Content-Type: application/json
...
As illustrated, methods that can alter the state of our system have protective measures against unauthorized execution. This underscores the importance of ensuring that GET methods in your API do not modify the system state, adhering to the principles of RESTful design.
CSP configuration
Content Security Policy (CSP) is a computer security standard designed to prevent cross-site scripting, clickjacking, and other code injection attacks that could execute malicious content in a trusted web application environment. It is implemented via the Content-Security-Policy header, typically included in the initial response from the web server.
It defines a whitelist of sources that the application is allowed to load resources from. It’s constructed using directives, each of which defines the policy for a certain resource type. Available directives include:
| Directive | Description |
|---|---|
| default-src | A general fallback directive used when a more specific resource directive is not defined. |
| script-src | Defines which scripts the protected resource can execute. |
| style-src | Specifies allowable sources of stylesheets. |
| img-src | Defines allowable sources of images. |
| connect-src | Controls which URLs the document can connect to via scripts (fetch, XHR, WebSockets, etc.). |
| frame-src | Specifies valid sources for embedded iframes. |
| object-src | Specifies valid sources for the `<object>`, `<embed>`, and `<applet>` elements. |
Let’s have a look at an example header:
Content-Security-Policy: default-src 'self'; connect-src 'self' https://api.example.com; img-src 'self' https://images.example.com; style-src 'self' 'unsafe-inline';
| Directive | Value / Values |
|---|---|
| default-src | 'self' |
| connect-src | 'self' https://api.example.com |
| img-src | 'self' https://images.example.com |
| style-src | 'self' 'unsafe-inline' |
This example allows loading resources from the same origin (domain). Additionally, API calls can be made to the origin https://api.example.com, and images can be loaded from https://images.example.com. Styles can be loaded from the same origin, including inline styles ('unsafe-inline'). By default, CSP restricts inline styles and scripts, and using them in your application is not recommended, as allowing inline code can open the door to XSS attacks.
If you want to perform a ‘dry run’ of your Content Security Policy (CSP) before enforcing it to ensure that the defined policy allows the application access to all required resources, you can use the Content-Security-Policy-Report-Only header. This allows you to define a CSP policy that will not block access to restricted resources but will instead report violations of the CSP to a defined endpoint.
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-report-endpoint/
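A policy like the one above is just a semicolon-separated list of directives, so it can be assembled from configuration. A minimal sketch (the directive names and sources mirror the example header; the helper function itself is illustrative):

```typescript
// A sketch: assemble a Content-Security-Policy header value from a
// directive map. Directive names and sources here are examples.
const cspDirectives: Record<string, string[]> = {
  "default-src": ["'self'"],
  "connect-src": ["'self'", "https://api.example.com"],
  "img-src": ["'self'", "https://images.example.com"],
};

function buildCsp(directives: Record<string, string[]>): string {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(" ")}`)
    .join("; ");
}

console.log(buildCsp(cspDirectives));
// default-src 'self'; connect-src 'self' https://api.example.com; img-src 'self' https://images.example.com
```

Keeping the policy in one structure makes it easy to emit the same directives as Content-Security-Policy-Report-Only first, then switch to enforcement.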
Strict-Transport-Security
The Strict-Transport-Security header instructs browsers to always use HTTPS instead of HTTP when communicating with the server, preventing the potential for downgrading to an insecure connection. Once a browser receives this header, it will always use HTTPS for subsequent requests to the server, including AJAX requests. This helps protect against man-in-the-middle attacks on downgraded communication.
max-age
The max-age directive tells the browser how long, in seconds, it should remember that the site is to be accessed using only the HTTPS protocol. In the example below, it’s set for one year (60 x 60 x 24 x 365).
Strict-Transport-Security: max-age=31536000
includeSubDomains
The includeSubDomains directive is an optional parameter that tells the browser to apply the rule to all of the site’s subdomains.
Strict-Transport-Security: max-age=31536000; includeSubDomains
Preloading
Websites can be included in the HSTS preload list, a list hard-coded into browsers ensuring that they always access these sites over HTTPS from the very first visit. To qualify for preloading, the preload directive can be used provided that max-age is set for at least 365 days and includeSubDomains is specified.
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
HSTS lists:
- https://www.chromium.org/hsts (Chrome)
- https://hg.mozilla.org/mozilla-central/raw-file/tip/security/manager/ssl/nsSTSPreloadList.inc (Firefox)
X-Content-Type-Options
When browsers receive data, they typically get a MIME type specified in the response header. However, to handle potential server misconfigurations and enhance user experience, browser vendors introduced MIME type sniffing (or content type sniffing). This means the browser will interpret the content of the response and process it based on its perceived format, rather than strictly adhering to the MIME type declared in the header.
This introduces a security risk. If malicious JavaScript is embedded in a response intended to be another format, MIME type sniffing could lead the browser to treat the response as text/html rather than, say, application/json. Consequently, the browser might execute malicious code that wouldn’t have run if the content were strictly treated as application/json.
The X-Content-Type-Options header allows us to disable the browser’s MIME type sniffing mechanism, thereby instructing it to strictly adhere to the declared Content-Type header. This header has only one valid value: nosniff.
X-Content-Type-Options: nosniff
The X-Content-Type-Options: nosniff header should ideally be set on all HTTP responses that can potentially be acted upon by a user agent (browser). This includes not just the initial page load (the “init call”) but also any AJAX/API responses that return content which may be misinterpreted by the browser’s MIME sniffing algorithm.
Referrer-Policy
When a user clicks a link or when an application requests to load an external resource (such as an image, script, or via AJAX calls), the browser typically includes the origin (URL) of the requesting page in the Referer header. While this can be beneficial for analytics or logging purposes, it might also inadvertently expose sensitive information, such as unique tokens from the URL.
For example, in a system that allows users to create and view reports, imagine a user adds an external link to a report. When another person previews this report and clicks on the external link, the web browser attaches a Referer header containing a unique token identifying the document. This inadvertently leaks information, potentially exposing the system to a “Broken Access Control” vulnerability.
Referer: https://example.com/reports/723eddd6-65e5-11ee-8c9a-0242ac120032
The Referrer-Policy header dictates to the web browser how much information should be included in the Referer header when navigating away from a page.
| Value | Description |
|---|---|
| no-referrer | The Referer header is omitted. Requests do not include any referrer information. |
| no-referrer-when-downgrade | Sends the full referrer (origin, path, and query string) when the security level stays the same or improves (HTTPS -> HTTPS, HTTP -> HTTP, HTTP -> HTTPS), but sends no Referer information when security downgrades (HTTPS -> HTTP). |
| origin | Sends only the origin in the referrer, e.g. https://example.com/ |
| origin-when-cross-origin | Sends full referrer information (origin, path, and query string) when requesting the same origin at the same protocol level (HTTP -> HTTP, HTTPS -> HTTPS). In other cases, sends only the origin (https://example.com). |
| same-origin | Sends the full Referer header for same-origin requests. For cross-origin calls, sends no Referer header. |
| strict-origin | Sends the origin as the referrer for same-origin and cross-origin requests, but drops the Referer when the security level downgrades (HTTPS -> HTTP). |
| strict-origin-when-cross-origin | Sends the full Referer header for same-origin requests. For cross-origin requests, sends only the origin, and only when the protocol stays at the same security level. Otherwise sends no Referer header. |
| unsafe-url | Always sends the whole Referer header. |
Historically, browsers defaulted to the no-referrer-when-downgrade policy; modern browsers now default to strict-origin-when-cross-origin. For a balance between security and providing useful information, the recommended options are typically same-origin or strict-origin-when-cross-origin.
Referrer-Policy: strict-origin-when-cross-origin
The header should be attached to the initial call of the application, but it’s good practice to attach it to all responses, including API calls, especially when your endpoint could be called from outside your main application. You can configure your web server or load balancer to automatically send the Referrer-Policy header.
TypeScript configuration
Enable all strict type-checking options in tsconfig.json; this helps you avoid many bugs that can lead to potential vulnerabilities.
{
  "compilerOptions": {
    "strict": true,
    // other compiler options...
  }
}
By setting the option “strict” to true, you enable a set of settings that enforce strict typing in the code.
alwaysStrict
It ensures that your files are parsed in ECMAScript strict mode and emits “use strict” for each source file.
"use strict";
x = 3.14; // This will cause an error because x is not declared
“Use strict” mode makes it easier to write secure code by enforcing the following:
- Variable declarations must be made using let, var, or const. This ensures that variables are properly scoped and helps prevent unintentional modifications to global variables.
- The value of this is undefined when a function is called outside of an object context (e.g., not as a method of an object). This helps prevent accidental modifications to the global object.
- Attempting to delete variables, function parameters, or functions results in an error. This enforces a more predictable and less error-prone coding environment.
"use strict";
var myVar = "Hello";
delete myVar; // This will cause an error in strict mode
"use strict";
function myFunction() {
return "Hello";
}
delete myFunction; // This will cause an error in strict mode
"use strict";
function myFunction(param) {
delete param; // This will cause an error in strict mode
return param;
}
- Duplicate parameter names in function declarations are not allowed. This helps prevent errors and ambiguities in function calls and definitions.
- Octal literals like 01234 are not allowed.
- Assigning values to read-only properties throws a TypeError.
- The with statement is not allowed, due to unpredictable results and code optimization issues.
strictNullChecks
This setting makes null and undefined distinct types, so you must handle them explicitly before using a value that might be missing. Let’s examine an example:
declare const loggedInUsername: string;
const users = [
{ name: "John", age: 22 },
{ name: "Andrew", age: 45 },
];
const loggedInUser = users.find((u) => u.name === loggedInUsername);
console.log(loggedInUser.age); // This will cause an error when strictNullChecks is enabled
The compiler will throw an error because you haven’t ensured that loggedInUser is not undefined (Array.prototype.find returns undefined when no element matches).
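Fixing the error means proving to the compiler that the value is present before using it, for example with an explicit guard or optional chaining:

```typescript
// Same data as above; with strictNullChecks, `find` returns
// `{ name: string; age: number } | undefined`.
const users = [
  { name: "John", age: 22 },
  { name: "Andrew", age: 45 },
];

const loggedInUser = users.find((u) => u.name === "John");

// Option 1: explicit guard narrows the type inside the block.
if (loggedInUser !== undefined) {
  console.log(loggedInUser.age); // 22
}

// Option 2: optional chaining with a fallback value.
console.log(loggedInUser?.age ?? "unknown user");
```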
strictBindCallApply
This setting enforces strict type checking for the bind, call, and apply methods on functions.
function fn(x: string) {
return parseInt(x);
}
const n1 = fn.call(undefined, "10");
const n2 = fn.call(undefined, false); // This will cause an error when strictBindCallApply is enabled
strictFunctionTypes
In TypeScript, strictFunctionTypes checks function parameter types contravariantly, preventing the assignment of a function that accepts only a narrower parameter type (e.g., string) to a function type that expects a broader one (e.g., string | number).
function fn(x: string) {
console.log("Hello, " + x.toLowerCase());
}
type StringOrNumberFunc = (ns: string | number) => void;
// Unsafe assignment is prevented by error Type '(x: string) => void' is not assignable to type 'StringOrNumberFunc'.
let func: StringOrNumberFunc = fn;
strictPropertyInitialization
Raises an error when a class property is declared but neither initialized at its declaration nor assigned in the constructor.
class UserAccount {
name: string;
accountType = "user";
email: string; //Property 'email' has no initializer and is not definitely assigned in the constructor.
address: string | undefined; // No error: 'undefined' is explicitly part of the type
constructor(name: string) {
this.name = name;
// Note that this.email is not set
}
}
noImplicitAny
The noImplicitAny option in TypeScript is used to prevent the compiler from inferring the any type for variables, parameters, and return types when it cannot determine a more specific type.
function process(data) { // This will cause a compilation error
console.log(data);
}
function process(data: string) { // No error with noImplicitAny
console.log(data);
}
noImplicitThis
The noImplicitThis option in TypeScript flags any usage of the this keyword that implicitly has the type any. When enabled, it requires that the type of this is explicitly defined in contexts where its type cannot be implicitly inferred by TypeScript.
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
greet() {
console.log(`Hello, my name is ${this.name}`);
}
delayedGreet() {
setTimeout(function() {
// Error with noImplicitThis: 'this' implicitly has type 'any' because it does not match any specific context
console.log(`Hello, my name is ${this.name}`);
}, 1000);
}
delayedGreetNoError() {
setTimeout(() => {
// No error with noImplicitThis, as 'this' is lexically bound to the instance of Person
console.log(`Hello, my name is ${this.name}`);
}, 1000);
}
}
useUnknownInCatchVariables
The option changes the default type of variables in catch clauses from any to unknown.
try {
// Some operation that may throw an error
throw new Error("An error occurred");
} catch (error) {
// With useUnknownInCatchVariables enabled, `error` is of type unknown
console.log(error.message); // Error: Object is of type 'unknown'.
}
To handle the error correctly, you need to narrow its type:
try {
// Some operation that may throw an error
throw new Error("An error occurred");
} catch (error) {
if (error instanceof Error) {
// Now TypeScript knows `error` is an instance of Error, so this is safe
console.log(error.message); // No error
} else {
// Handle cases where the caught value isn't an Error object
console.log("Caught an unexpected error type");
}
}
Environment variables
You should not include passwords, tokens, or any other sensitive data in your codebase. Typically, it’s also advisable not to store them in environment variables. This practice can be challenging for developers accustomed to working on backend applications. However, in backend applications, it is generally safe to store sensitive data in environment variables, as they are not exposed to the client-side. In React applications, as in any client-side application, all environmental data are injected into the generated code during the build process and delivered to the web browser, and should therefore be considered publicly exposed.
Despite this, there are some best practices to follow, such as keeping environment-related data in .env files. This approach becomes particularly important when managing different instances for various clients, especially those secured by internal VPNs. Instead of storing all client configurations in a common codebase, move all client-specific data into client-specific environment variables.
You should consider including only templates of .env files, or .env files for local development environments, in your codebase, and manage all client-specific data securely in other locations.
Routing security
Ensure that routes restricted to logged-in users are protected by implementing authentication and authorization checks in Higher-Order Components or wrappers.
Example of a simple wrapper:
import { useAuth } from "../hooks/useAuth";
import { Navigate } from "react-router-dom";
interface ProtectedRoutePropsWithChildren {
children: React.ReactNode;
}
export const ProtectedRoute = ({
children,
}: ProtectedRoutePropsWithChildren) => {
const { isAuthenticated } = useAuth();
if (!isAuthenticated) {
return <Navigate to="/login" replace />;
}
return <>{children}</>;
};
An example of protecting the dashboard page against direct access via the URL address:
<Router>
<Routes>
<Route path="/login" element={<LoginPage />} />
<Route
path="/"
element={
<ProtectedRoute>
<DashboardPage />
</ProtectedRoute>
}
/>
</Routes>
</Router>
State management
Avoid storing sensitive data
As a rule of thumb, avoid storing tokens, passwords, and any personal or confidential data in localStorage or cookies. First, all scripts running on the domain have access to this data, making it vulnerable to theft in the event of an XSS attack. Additionally, the data persists until explicitly removed: if a user simply closes the web browser, you cannot clear the data upon session expiration, and it remains available in the browser.
Use HttpOnly cookies
Store the session ID or token in a cookie set by the backend with the HttpOnly flag. This means the cookie is stored in the web browser and attached to requests, but JavaScript code cannot access it. In the event of a successful XSS attack, this will help protect your session ID or token from being stolen, as the cookie cannot be accessed by client-side scripts.
Example Python backend code (using FastAPI):
from fastapi import FastAPI, Response, status
app = FastAPI()
@app.post("/login")
async def login(response: Response):
user_authenticated = True # This should be replaced with actual authentication logic
if user_authenticated:
session_token = "your_secure_session_token_here"
# Set the HTTP-only cookie
response.set_cookie(key="session_token", value=session_token, httponly=True, samesite='Lax')
return {"message": "User logged in successfully"}
else:
return Response(status_code=status.HTTP_401_UNAUTHORIZED, content="Unauthorized")
After a successful login request, the backend’s response includes a Set-Cookie header with the HttpOnly attribute:
Set-Cookie: session_token=your_secure_session_token_here; HttpOnly; SameSite=Lax
In the frontend application, you don’t need to worry about handling the token; you simply don’t have access to it. The token is automatically attached to requests. The biggest disadvantage (and, at the same time, an advantage) of this approach is losing control of the token in the frontend application. Renewal and removal of the token must be handled by the backend, and any data stored in the JWT cannot be read directly by the client, so it must be delivered another way. Overall, though, this is the safest way to store auth data in your web application.
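On the frontend, the only change needed is to opt in to sending cookies with cross-origin API calls, e.g. via the credentials option of fetch. A sketch (the local RequestOptions type and the example URL are illustrative):

```typescript
// A sketch of frontend request options: with an HttpOnly session cookie,
// the client only has to opt in to sending credentials.
interface RequestOptions {
  method: string;
  credentials: "include" | "omit" | "same-origin";
  headers: Record<string, string>;
}

function authorizedRequestInit(method: string = "GET"): RequestOptions {
  return {
    method,
    credentials: "include", // the browser attaches cookies (incl. HttpOnly) automatically
    headers: { "Content-Type": "application/json" },
  };
}

// Usage (URL is illustrative):
// fetch("https://api.example.com/me", authorizedRequestInit());
console.log(authorizedRequestInit().credentials); // "include"
```

Note that the server must also respond with Access-Control-Allow-Credentials: true and a specific (non-wildcard) Access-Control-Allow-Origin for credentialed cross-origin requests to succeed.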
Use of enums
Using enums instead of strings reduces the likelihood of making mistakes:
- They enforce the use of predefined values.
- They document all possible values, helping to avoid “magic” strings or integers whose meaning requires checking the entire codebase.
- They prevent misconfiguration due to typos.
enum ProcessingStatus {
IDLE = "idle",
LOADING = "loading",
SUCCEEDED = "succeeded",
FAILED = "failed",
}
interface Process {
  status: ProcessingStatus;
}
function isProcessFinished(process: Process){
return process.status === ProcessingStatus.SUCCEEDED || process.status === ProcessingStatus.FAILED;
}
Use of interfaces
When your component receives parameters, it’s crucial to declare a clear interface that describes the data you are expecting. Let’s examine an example of code lacking a clear interface:
const InsecureButton = (props) => {
//Don't do it at work
const { children, ...rest } = props;
return <button {...rest}>{children}</button>;
};
Unfortunately, this style of coding is quite common, especially in modern and flexible languages. When you want to use or modify the InsecureButton, you must examine the code within this component (which can be far more complex than in this example). Then, you need to find instances of InsecureButton usage and look at the <button> element’s definition to understand what types of properties are being passed.
This approach leads to type ambiguity, runtime errors, and a heightened risk of mistakes that can make the code vulnerable to XSS attacks. Let’s look at a better approach that makes the code far easier to understand and maintain:
interface ButtonProps {
onClick: () => void;
children: React.ReactNode;
className?: string;
}
const SecureButton: React.FC<ButtonProps> = ({ onClick, children, className }) => (
<button onClick={onClick} className={className}>
{children}
</button>
);
SDLC Good Practices
Although the Software Development Life Cycle (SDLC) is not specific to React or TypeScript, and it’s a wide topic deserving a separate article, it is worth mentioning here because these practices are crucial for creating secure software.
Well documented software
One of the higher risks in a project is having work performed by team members who do not understand what they are doing or lack perspective in the application area they are working on. Although everything may seem simple to you, bear in mind that in the future, there will be a developer, QA engineer, or product owner who is new to the project and will have to learn everything. Ensure that the code is well documented at all levels:
- Self-documented code.
- Comments (for aspects that cannot be clearly expressed by code alone).
- Technical documentation (wider technical perspective).
- Business documentation.
For many product owners, the process of describing business scenarios, use cases, and explaining features in detail may seem like a waste of time. They prefer to commence implementation right after a conversation, based on tasks described in a rudimentary manner. Sooner or later, however, a user manual is required, and those descriptions must be created anyway. Starting with well-described tasks and a clear understanding of the application areas we are working on produces a document that can be used not only by developers: it aids in preparing user manuals and help pages, introducing new team members, creating test cases, and more. This is work that has to be done anyway, and it does not become easier after implementation.
Code review
Don’t leave your coworkers to grapple with the code alone. Code reviews align the quality of your software with the highest level of collective knowledge among the developers. This process takes time, so if necessary, incorporate it into your SDLC or management systems by creating separate tasks for it. Invest some effort: download the code, try using it, debug it a bit, and think from a broader perspective, rather than focusing solely on the most obvious issues like basic code style.
For each checking item consider:
- Correctness and functionality
- Error handling
- Security Considerations
- Performance
- HTTP Requests and external systems interactions
- Code style
- Documentation
- Testing (automated or how to test it manually)
- Resources
- Deployment
- etc
For instance, when dealing with an HTTP request, consider the following:
- What will happen if we receive an error?
- What if there is a timeout or the request takes a very long time to process?
- Can the GET parameter be too long?
- Do we need to prepare firewall rules to allow the request?
- Where should requests be made in different environments (local development, test, staging, production)?
- etc
Decide on the number of people required for a code review and enforce merge restrictions based on the number of approvals in your code repository. Make code reviews mandatory, but allow for exceptions only when absolutely necessary.
Testing
Ensure that at the code level, both unit and integration tests are conducted on the code that needs testing, not just on the code that is easiest to test. It’s important that acceptance criteria are established before the code is delivered, and that the QA team prepares test scenarios. With these scenarios in place, the most time-consuming testing tasks can be automated. A task is not considered done merely by integrating the code; it is only complete when the integrated code is verified by appropriate tests. Make it a practice to conduct regression tests on at least the areas that are directly affected by the modified code. Ideally, full automated regression tests should be performed after every code change.
I encourage you to check out my other post on this topic: Quality assurance (QA) roles in IT Agile projects
Summary
In conclusion, building secure React TypeScript applications requires a multifaceted approach that encompasses understanding and mitigating common vulnerabilities, adhering to best practices in coding and architecture, and fostering a culture of security awareness within the development team. By incorporating the strategies and considerations discussed, developers can significantly enhance the security posture of their applications. Remember, security is not a one-time effort but a continuous process that evolves with your application and the landscape of threats. Stay informed, stay vigilant, and make security a foundational element of your development process.