Building a ReactJS App with GraphQL Middleware and NodeJS-MongoDB Backend

Carlos Cuba
May 13, 2024


Sometimes I miss the times of old. Even though it was 2006, the go-to solution for a reliable website involved little more than HTML, CSS, and the occasional bit of JS, exactly as I learned it in high school that year. Maybe that was because internet services weren't as accessible as they are now, and technology hadn't yet caught up to leverage the advantages of a website that we now take for granted in an inherently, always-connected world, within reach of a metal box in your pocket.

But times change: technologies to host websites improve, and web apps change with them. The days of FTP and working without version control are long gone, and now it's more important than ever that a web application follows current coding practices and facilitates development work with more than just vanilla JavaScript. Yes, things have become more complex, but this world of emerging technologies offers us several ways to leverage technology to produce better software! And yet, frameworks such as ReactJS and AngularJS, while powerful in a development environment, rely on non-static code to maintain complex state and data changes within an application. For that workflow, good ol' dinosaurs like FileZilla are rendered pretty much obsolete!

Even so, static websites are still around, through free services like GitHub Pages, where a build of a project can be hosted and accessed via a browser, so long as it doesn't involve a complex software development lifecycle (namely, communication with other applications that change its state or data). These free services let hobbyists and developers host their own content online, even if only with the basic technology of yesteryear. HTML, CSS, and JS files still run properly, but the advantages of the previously mentioned frameworks are limited there. Of course, leading cloud hosting services such as AWS, Azure, and Heroku provide some form of cheap hosting. But if one is not careful with the setup and configuration of such services, even an idle container meant to host applications that may or may not receive traffic will eventually become a little vacuum on your income through whatever hosting fees it incurs.

Thankfully, there are also online services offering some form of hosting that, when used appropriately, can scale up into a robust application, making for useful development work for hobbyists, rookies, and maybe even small businesses. While it lasts (and hopefully it does), services such as Vercel allow any user to host several serverless applications, which makes it easy to prepare and deploy apps and have them work without worrying about complexity limits.

This tutorial is meant to provide a more complete experience of the software development lifecycle; namely, the basics of a distributed system: a ReactJS frontend app that connects to a GraphQL middleware, which in turn connects to a NodeJS Express backend service that talks to a MongoDB database, all hosted for free.

The setup

First, you're gonna need a GitHub account at https://github.com/signup:

With your account set up, you should be able to create at least one repository at https://github.com/new:

Note that this tutorial will use ReactJS and NodeJS in case you want to create a .gitignore for the project. In any case, I’ll add one here for each app later! For this tutorial, I made a repository titled like this:

With that, it’s time for the fun part!

Boilerplate Setup

First off, you can either clone the repo you created, or, in a folder with the same name as the repo, follow the steps from the repo:

Either push that first commit, or proceed. Next, you’ll want to open a terminal and run npm init. This will create a package.json file. It will look like this for now:

{
  "name": "react-graphql-node-mongo-boilerplate",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

Why is this important? For this boilerplate, I'm focusing on keeping all the applications together, and this package.json will come in handy to control them all. That said, there's nothing stopping you from placing each part of this (frontend, middleware, backend) into separate repositories.

In any case, the project structure would be good enough if it looks like this:

Go ahead and push the code into your repository.

The Frontend

Execute the following command:

npx create-react-app frontend --template typescript

This will create the ReactJS app that hosts the code for the frontend. Once done, the project structure will look like this:

and if you run npm start, the app will look like this in the browser:

Boilerplate/Frontend integration

Up next, for the boilerplate we're gonna ensure we can run the frontend app from outside its folder. Change the main package.json to look like this:

{
  "name": "react-graphql-node-mongo-boilerplate",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start:fe": "npm run --prefix ./frontend start"
  },
  "author": "",
  "license": "ISC"
}

Note: Not that testing isn’t important, but for the purposes of keeping this tutorial a bit briefer I’m not including it.

In any case, note that we now have a script, start:fe. Using the --prefix flag, we're able to run commands from an inner directory, which allows us to run the frontend app from a centralized location. By this I mean we should eventually be able to run the middleware and the backend from the same place, at the same time.
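
As a quick illustration (assuming the folder layout above), running this from the repo root should have the same effect as cd frontend && npm start:

npm run start:fe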

Frontend Vercel Integration

Now that we have at least one app, we can begin deploying to ensure we take advantage of Vercel as a free tool for hosting. For this, you’ll need to create an account in Vercel at https://vercel.com/signup .

Once done, you should be able to create a new project:

You should then be able to import a git repository:

Upon selecting the repo we're working on, you should be able to give this project a name, select its framework, and set its root directory. This is really useful, as it allows you to keep all your apps in a single repo, a monorepo if you will:

Once done, click deploy. It should let you see a screen similar to this:

If you click on Visit, you should be able to view your ReactJS app deployed online, for free!

The beauty of linking the repo and deploying it on Vercel involves the ease with which you can now publish your code. Every time your code gets pushed, the Vercel project will recognize there is a change and prepare a new deployment.

The Middleware

We're now going to create another application. The middleware will serve as a middleman, a sentinel of sorts that routes communication between the frontend and the services in the backend (even if for this tutorial we're only implementing one).

At the root level, create a folder called middleware. Inside, run npm init -y. This will create a basic project which we’re gonna fit to run with both GraphQL and Vercel.

Inside the middleware folder, create a folder called api. Inside it, make a file called server.js. This is where the main setup for GraphQL and Vercel will happen. Add the following code:

const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');
const { createServer } = require('http');
const cors = require('cors');

const isDev = process.env.MIDDLEWARE_ENV === 'dev';

const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'Hello world!',
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: isDev,
  playground: isDev
});

const app = express();
app.use(cors());

async function startServer() {
  await server.start();
  server.applyMiddleware({ app });

  // Only listen on an HTTP port in local development, not when deployed on Vercel
  if (!process.env.VERCEL) {
    const PORT = process.env.PORT || 4000;
    app.listen(PORT, () => console.log(`💫 Server ready at http://localhost:${PORT}/graphql`));
  }
}

startServer();

// Wrap the Express app so Vercel can invoke it as a request handler
const requestHandler = app;
const vercelServer = createServer((req, res) => requestHandler(req, res));

module.exports = vercelServer;

For now, we're using very simple implementations of resolvers (functions that resolve the structure of the data that comes back from a service call in the backend) and typeDefs (which specify the schema of the queries and the types of their responses). We allow the introspection query and the playground only in a development environment, since exposing even the data schema on a deployed instance would make it much less secure.

Note that here we use process.env.MIDDLEWARE_ENV to determine if we’re in a development environment. This means that we need to be able to specify which environment we want to use upon running. Since we want to have the playground available on dev, we’ll want to include MIDDLEWARE_ENV as an environment variable for use upon running npm start. One simple way to achieve this in NodeJS is to create a .env file inside the middleware folder. It should look like this:

MIDDLEWARE_ENV=dev

And now we need to actually use this variable, as just because we add a .env file doesn't mean the app automatically knows to read it. For that, we can install the dotenv package. While we're at it, let's install the other packages used in server.js! Run the following inside the middleware folder:

npm i @vercel/node apollo-server-express dotenv express graphql

Why @vercel/node? Because we'll wire this app into the Vercel deployment, since somehow Vercel doesn't include a NodeJS framework preset :( For this, create a vercel.json file inside the middleware folder:

{
  "version": 2,
  "routes": [
    {
      "src": "/graphql",
      "dest": "/api/server.js"
    },
    {
      "src": "/(.*)",
      "dest": "/api/server.js"
    }
  ]
}

Here we indicate that any route will go through server.js, and that GraphQL calls from the frontend (the /graphql route) will end up in that same place.

Now, to be able to run this locally, we modify our package.json to include a start command:

"start": "node -r dotenv/config api/server.js"

After running npm start, you should be able to go to http://localhost:4000/graphql and there you’d see this:

Click on the Query your server button to go to the studio playground. Once there, you should see something like this:

Note that the root types show the type definitions and resolver query functions that you have created. If you have too many, there's always that magnifying glass to search through them!
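
If you'd rather test from a terminal than from the playground, a raw POST against the endpoint should work too (this assumes the default port of 4000 from the code above, with the middleware running):

curl -X POST http://localhost:4000/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ hello }"}'

If everything is wired up, the response should be something like {"data":{"hello":"Hello world!"}}.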

But now that we have it up and running, should we push to the repo? Remember that we now have a .env file, and should it ever hold sensitive info (URLs, keys, etc.), it would be a security hazard for your repo and any apps that depend on it. So we want to make sure we never push this specific file, and for that it's time to add a .gitignore file inside the middleware folder, with the following:

# Dependency directories
node_modules/

# Environment variables
.env

# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

# Coverage directory used by tools like istanbul
coverage
*.lcov

# nyc test coverage
.nyc_output

# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt

# Bower dependency directory (https://bower.io/)
bower_components

# node-waf configuration
.lock-wscript

# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release

# Dependency directories
jspm_packages/

# TypeScript v1 declaration files
typings/

# Optional npm cache directory
.npm

# Optional eslint cache
.eslintcache

# Microbundle build outputs
dist/
build/

# Optional REPL history
.node_repl_history

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# dotenv environment variable files
.env.test

# parcel-bundler cache (https://parceljs.org/)
.cache
.vercel

Though we only really need to add node_modules and .env, this is a comprehensive file which you can grab from repos like https://github.com/github/gitignore , which holds collections of gitignore files for many types of projects.

But what happens if you complete a project like this and don't get to work with it for some time? If your code remains this way, you may well forget which env vars are being used. You could always look them up in your code editor, but that's work you don't really need. What you can do to hint your future self, or anyone who wishes to fork your repository, is create a .env.sample file which only holds the env var names:

MIDDLEWARE_ENV=

Of course, it’s your responsibility to ensure no sensitive info is ever stored here. At the end of this middleware setup, the middleware folder structure would look like this:

Boilerplate/Middleware Integration

Continuing with the boilerplate process, we want to be able to run the middleware app from outside it. To do this, we’ll need the concurrently package. Outside of the middleware folder, run:

npm i concurrently -D

This will install the package as a development dependency, since we only need it during development to run more than one app at once. Next, we'll need to add two more scripts to the root package.json:

"start:mw": "npm run --prefix ./middleware start",
"start": "concurrently \"npm run start:fe\" \"npm run start:mw\""

With these, if we want to only run the middleware outside the project, we run start:mw. The new start command will use the newly installed concurrently package to run both the start:fe and start:mw commands together.
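
For reference, the root package.json should now look roughly like this (a sketch; your metadata fields and the exact concurrently version may differ):

{
  "name": "react-graphql-node-mongo-boilerplate",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start:fe": "npm run --prefix ./frontend start",
    "start:mw": "npm run --prefix ./middleware start",
    "start": "concurrently \"npm run start:fe\" \"npm run start:mw\""
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "concurrently": "^8.2.2"
  }
}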

But now that we have one package, we'll have a package-lock.json file and a node_modules folder at the root. We want to commit the file, but not the folder. Add this simple .gitignore:

/node_modules

This ensures we don't push any installed dependencies into the repo. The boilerplate structure would now look like this:

Middleware Vercel Integration

Now we should be able to create another project in Vercel. Choose the same repo and give it a name, but under Root Directory choose the middleware folder. You should now have two projects deployed!

The Database

Now we’re gonna create a MongoDB account to host our free NoSQL database. To start, go to https://account.mongodb.com/account/register and fill out the form.

Once done, you wanna follow these simple MongoDB tutorials:

However, as you create a cluster, you wanna make sure you select the M0 shared cluster deployment:

This ensures that your database will be hosted in the free tier. Of course, if you anticipate considerable traffic, or enough users to require a lot of storage, you'd be better off with the other deployment templates. Finally, click on Create Deployment.

Once that’s done, head over to the Database option at the left menu, then click on the Browse Collections button:

Once in, you can add a sample dataset to start working, or add your own data:

Fill out the Database name and the collection name and then click Create.

Next, find the button to insert a document, and click it. On the modal, add this data:

[
  {"userId": 1, "name": "Andrew"},
  {"userId": 2, "name": "Bob"},
  {"userId": 3, "name": "Charles"},
  {"userId": 4, "name": "Damian"}
]

Of course, the accounts database, the users collection, and the data are all just for test purposes. For your project, you're free to use whatever data you see fit. Once this is done, click on Database Access on the left menu:

This is so you're able to Add a New Database User, which is the user that will handle access to the database from the backend. Click on the button, fill out the user name and password, then scroll down to select a role. Read and write should suffice:

Finally, click on Add User.

The Backend

Before we move back to the code, in MongoDB go to Database from the left menu and click on connect:

Next, click on Drivers, and then under Driver select Node.js. The latest version should be fine:

As mentioned, we’ll need the mongodb package for the backend. But also make a note of the connection string, which follows this format:

mongodb+srv://<db-user>:<db-password>@<cluster-url>?retryWrites=true&w=majority&appName=<app-name>
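
For example, with entirely made-up values (user testuser, password s3cret, cluster cluster0.abcde.mongodb.net, and an app named boilerplate), the string would look like:

mongodb+srv://testuser:s3cret@cluster0.abcde.mongodb.net/?retryWrites=true&w=majority&appName=boilerplate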

With that in mind, let's go back to the repo. At the root level, make a folder called backend, and inside it run npm init -y. This will create a simple NodeJS app. Also inside it, create a file index.js with the following:

const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const testRouter = require('./routes/testRoutes');
const connectToDb = require('./configs/db.config');

const app = express();

app.use(express.urlencoded({ extended: true }));
app.use(bodyParser.json());
app.use(cors({ credentials: true, origin: '*' }));

app.get('/', (req, res) => {
  res.send('Hello from the NodeJS backend!');
});

app.use('/test', testRouter.getUsers);

async function startServer() {
  const port = process.env.PORT || 5000;
  app.listen({ port }, () =>
    console.log(`✨ Server ready at http://localhost:${port}`)
  );
  connectToDb();
}

startServer();

express lets us handle requests and route them to specific functions within a backend service, enabling RESTful capabilities so our app can, for example, interact with a database to perform transactions, which basically completes the full cycle of a web application. body-parser ensures that, when hooked into the app, any data sent as JSON to a request route is properly understood. cors is used to allow cross-origin requests from any domain, whether bare or with credentials. testRouter and connectToDb are imports from code I'll show later.

But in short, what index.js does is start a server on a default port of 5000, establish a database connection, and handle two separate requests: one to /, which shows a simple hello-world style text, and another to /test, where we'll get our users from the database. In any case, to start seeing this in action, run the following command inside the backend folder:

npm i body-parser cors dotenv express mongodb

Next, modify the scripts to use this one:

"start": "node -r dotenv/config index.js"

This should allow the program to run. But since we're using dotenv, and index.js does use process.env, we need to set up environment variables. Create a .env.sample file that looks like this:

PORT=
DB_USER=
DB_PASSWORD=
DB_CLUSTER_URL=
DB_APP_NAME=
DB_NAME=
DB_COLLECTION=

Create your own .env from it and fill out the variables. You should have all of these values from the connection string, except the port, which could be 5000 or anything else so long as it doesn't clash with the ports used by the middleware or the frontend. Now add the .gitignore. It can be similar to the one used in the middleware, so long as it mentions node_modules and .env.

If you want to see this in action, try commenting out the index.js lines that mention testRouter, connectToDb (the function call at the bottom as well), and the /test route. Once you run npm start, you should see something like this in your browser:
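
If you prefer the terminal, the same check works with curl; the greeting comes straight from the / handler above:

curl http://localhost:5000/
# Hello from the NodeJS backend!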

So at least the text request works! Now let's get the db request going. If you tested the code as above, uncomment the code in index.js, and create two folders:

1. Folder configs: Make a file inside called db.config.js with the following:

const { MongoClient } = require('mongodb');

// Build the Atlas connection string from env vars
const uri = `mongodb+srv://${process.env.DB_USER}:` +
  `${process.env.DB_PASSWORD}@${process.env.DB_CLUSTER_URL}/` +
  `?retryWrites=true&w=majority&appName=${process.env.DB_APP_NAME}`;
const client = new MongoClient(uri);

let database;

const connectToDb = async () => {
  // Reuse the existing connection if one was already established
  if (database) {
    return database;
  }
  try {
    await client.connect();
    console.log('📘 Connected to MongoDB');
    database = client.db(process.env.DB_NAME);
    return database;
  } catch (error) {
    console.log(error);
  }
};

module.exports = connectToDb;

So now, whenever you call connectToDb (like in the index file), we'll establish a connection to our MongoDB database. The if (database) block ensures that even if connectToDb is called repeatedly, an existing connection is reused instead of reconnecting.
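
A minimal sketch of why that guard matters: any later caller gets the cached handle instead of opening a new connection (assuming the first call succeeded):

const connectToDb = require('./configs/db.config');

const demo = async () => {
  const db1 = await connectToDb(); // connects and caches the database handle
  const db2 = await connectToDb(); // returns the cached handle immediately
  console.log(db1 === db2); // true
};

demo();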

2. Folder routes: Inside, make a file called testRoutes.js, which will hold any call related to users. Write the following:

const connectToDb = require('../configs/db.config');

const getUsers = async (req, res) => {
  try {
    let db = await connectToDb();
    const collection = db.collection(process.env.DB_COLLECTION);
    const data = await collection.find({}).toArray();
    res.send({
      message: 'And the Backend too!',
      data
    });
  } catch (error) {
    console.error('Error accessing the database', error);
    res.status(500).send('Failed to fetch data');
  }
};

module.exports = {
  getUsers,
};

As mentioned for the previous file, connectToDb can easily be reused here without risking a reconnection. As for this file: whenever we call getUsers (like in the /test route in index.js), we grab the collection defined by the env vars, convert it to an array, and send the result. To keep this tutorial brief, I'll only show how to fetch all users. However, all other CRUD operations can easily be implemented with the following, plus a few more examples to give a feel for how to handle and query data:

const collection = db.collection('users');

// get all
await collection.find().toArray();

// get one (for example, by name)
await collection.findOne({ name: 'Charles' });

// create one
await collection.insertOne({ userId: 5, name: 'Eric' });

// create many
await collection.insertMany([
  { userId: 5, name: 'Ethan' },
  { userId: 6, name: 'Felix' }
]);

// update one
await collection.updateOne({ userId: 5 }, { $set: { name: 'Ethan' } });

// delete one
await collection.deleteOne({ userId: 5 });

// get all whose userId is less than 3
await collection.find({ userId: { $lt: 3 } }).toArray();

// append '*' to the names of users whose id is in the list [2, 3]
// (note the array brackets: $concat is an aggregation operator, so the
// update must be written as an aggregation pipeline to reference '$name')
await collection.updateMany(
  { userId: { $in: [2, 3] } },
  [{ $set: { name: { $concat: ['$name', '*'] } } }]
);

// delete all whose userId is an odd number
await collection.deleteMany({ userId: { $mod: [2, 1] } });

To learn more of these, a very useful operator list can be found at https://www.mongodb.com/docs/manual/reference/operator/query/ ; it should help you manipulate the data in the db as you need. In any case, with the code for the three files in, we can now query MongoDB locally. Run the backend again, and once in the browser, head to /test. It should look like this:

Nothing wrong with the first record, I just added it with the props in the wrong order that time 😅
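
You can also hit the route from a terminal. Assuming the test data from earlier, the response should resemble the following (MongoDB adds an _id field to each document, and your exact values will differ):

curl http://localhost:5000/test
# {"message":"And the Backend too!","data":[{"_id":"...","userId":1,"name":"Andrew"}, ...]}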

Since we need to integrate with Vercel, go ahead and add a vercel.json inside the backend folder:

{
  "version": 2,
  "builds": [
    {
      "src": "./index.js",
      "use": "@vercel/node"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "/"
    }
  ]
}

Finally, we're gonna go to the root package.json to add a new script and update the start one:

"start:be": "npm run --prefix ./backend start"

"start": "concurrently \"npm run start:fe\" \"npm run start:mw\" \"npm run start:be\""

This will allow the boilerplate to run the three apps: the frontend, the middleware and the backend together. With this, the backend app is complete for the purposes of this boilerplate. The folder structure for it should look like this:

Backend Vercel Integration

After pushing your code, create another project in Vercel, choose the boilerplate repo, and give it a name, but under Root Directory choose the backend folder. Note that the project won't work yet because we're using env vars. Vercel gives you the option to add environment variables to a project, but you must redeploy it once they're added so they're taken into account:

You’ll want to add all the env vars we’re using for the MongoDB database, then redeploy the project.

But we're not yet done here! If you simply head over to the /test page, you might see this:

This gateway timeout error simply means that the IP address of the backend deployment isn't authorized to access the database. To fix it, add the MongoDB Atlas integration to your project (in this case, the backend project) here: https://vercel.com/integrations/mongodbatlas . The process is relatively straightforward, and you should be able to choose your cluster.

What this ultimately does is create a user, which you can find under Database Access in the MongoDB left menu, that handles the requests from the backend by whitelisting its IP.

With this, you should be able to see the same behavior on the backend's deployed URL's /test page as on http://localhost:5000/test.

Connect Local Middleware to Local Backend

Since the backend already sends data when a request hits it, we need to make sure we call that specific route from our middleware. Inside the middleware folder, create a src folder containing a graphql folder, which in turn should have two folders: data-sources and resolvers.

The resolvers folder will hold all the resolvers: functions which can modify the data that comes from a backend/service call, preparing it to be sent back to the frontend. The data-sources folder will hold all the data sources: functions which perform external calls (such as to the backend) with the data received from a resolver, returning the result, or a fallback in case of failure.

The resolvers folder will have a single folder, test, which will have a file test.js with:

const { gql } = require('apollo-server-express');
const TestDataSource = require('../../data-sources/test/test.js');

const testTypeDefs = gql`
  type UserData {
    name: String
    userId: String
  }
  type Response {
    message: String
    data: [UserData]
  }
  type Query {
    test: Response
  }
`;

const buildTestResponse = (response) => {
  return {
    message: `FE integrated successfully with Middleware! ${response.message}`,
    data: response.data
  };
}

const testResolvers = {
  Query: {
    test: async () => {
      const response = await TestDataSource.test();
      return buildTestResponse(response);
    },
  },
};

module.exports = { testTypeDefs, testResolvers };

Here, testResolvers is defined with all the functions that will serve as resolvers. Note the simplicity of the structure: the call to the data source function is made, then the data is arranged and sent back to the frontend using buildTestResponse. Alongside them, testTypeDefs holds all the queries to use (only one for this project) and defines the type of the response, ensuring we receive the appropriate info. This lets us see in the playground what the different queries or mutations across the middleware look like, ensuring that we prepare the query calls in the frontend with the correct arguments.
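
For reference, once these are wired into the server (we'll update api/server.js below) and the backend is running, the new query can be exercised from the playground with something like:

query {
  test {
    message
    data {
      name
      userId
    }
  }
}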

The data-sources folder will have a folder test which will have a file test.js:

const axios = require('axios');

class TestDataSource {
  static async test() {
    try {
      const response = await axios.get(`${process.env.BACKEND_URL}/test`);
      return response.data;
    } catch (error) {
      console.error('Failed to fetch data from backend:', error);
      return 'Error fetching data';
    }
  }
}

module.exports = TestDataSource;

This data source is a simple function which depends on axios to call the backend, simply returning the data and letting the resolver handle it as needed. So we'll need to run npm i axios inside the middleware folder to add the axios package to the middleware app.

Up next, update the api/server.js file to look like this. Basically, we're replacing the dummy typedef and resolver with the ones we just created:

const express = require('express');
const { ApolloServer } = require('apollo-server-express');
const { createServer } = require('http');
const cors = require('cors');
const { testTypeDefs, testResolvers } = require('../src/graphql/resolvers/test/test.js');

const isDev = process.env.MIDDLEWARE_ENV === 'dev';

const server = new ApolloServer({
  typeDefs: [testTypeDefs],
  resolvers: [testResolvers],
  introspection: isDev,
  playground: isDev
});

const app = express();
app.use(cors());

async function startServer() {
  await server.start();
  server.applyMiddleware({ app });

  // Only listen on an HTTP port in local development, not when deployed on Vercel
  if (!process.env.VERCEL) {
    const PORT = process.env.PORT || 4000;
    app.listen(PORT, () => console.log(`💫 Server ready at http://localhost:${PORT}/graphql`));
  }
}

startServer();

const requestHandler = app;
const vercelServer = createServer((req, res) => requestHandler(req, res));

module.exports = vercelServer;

Update the .env.sample file to include the new BACKEND_URL variable (the data source reads it via process.env.BACKEND_URL). In the .env file, this should be http://localhost:5000, as we want to access the local backend from the local middleware. Push your code.
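
For reference, the middleware's .env.sample should now read:

MIDDLEWARE_ENV=
BACKEND_URL=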

Connect Deployed Middleware to Deployed Backend

Although the middleware Vercel app should now have all the relevant GraphQL code, it won't connect to the deployed instance of the backend, because we're missing the BACKEND_URL variable. Under the middleware project's settings, add that env variable using the deployed backend URL. Any calls from the middleware to /test should then return the data the backend gets from that route.

With this, all that’s needed to finish is modify the frontend to connect to the middleware!

Connect Local Frontend to Local Middleware

Beforehand, to be able to write all our code and make a successful connection, we'll need to add a few packages inside the frontend folder:

npm i @apollo/client dotenv graphql react-router-dom
npm i -D @babel/plugin-proposal-private-property-in-object @types/react-router-dom @graphql-codegen/cli @graphql-codegen/typescript @graphql-codegen/typescript-operations @graphql-codegen/typescript-react-apollo

Inside the frontend folder, find the index.tsx file inside the src folder, and modify the App wrapper so that it introduces an ApolloProvider, which will use an ApolloClient to connect the frontend to the middleware. The file should look like this:

import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';
import { ApolloProvider } from '@apollo/client';
import client from './apolloClient';

const root = ReactDOM.createRoot(
  document.getElementById('root') as HTMLElement
);
root.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>
);

// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals
reportWebVitals();

So we’ll need to create an apolloClient.ts file at the same level, which will look like this:

import { ApolloClient, InMemoryCache } from '@apollo/client';

const client = new ApolloClient({
  uri: process.env.REACT_APP_GRAPHQL_ENDPOINT,
  cache: new InMemoryCache()
});

export default client;

This is a very simple implementation that uses an in-memory cache for faster reading and writing of data, and a uri specified by an env var. So you'll need to create a .env.sample right inside the frontend folder:

REACT_APP_GRAPHQL_ENDPOINT=

Then make a .env to match. Locally, you'll want this to have the value http://localhost:4000/graphql, as that's the local address of the GraphQL endpoint the frontend's queries will call. For Vercel, you'd want to go to your frontend app's settings and add this value, ensuring it uses the middleware app's deployment URL (typically the same one displayed when clicking the link to visit it, with /graphql appended).
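
That is, your local frontend .env would contain:

REACT_APP_GRAPHQL_ENDPOINT=http://localhost:4000/graphql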

Now, modify the App.tsx file to look like this:

import React from 'react';
import logo from './logo.svg';
import './App.css';
import { Route, BrowserRouter as Router, Switch } from 'react-router-dom';
import TestPage from './pages/TestPage';

function App() {
  return (
    <Router>
      <div>
        <Switch>
          <Route path="/test">
            <TestPage />
          </Route>
          {/* Add more routes as needed */}
          <Route path="/">
            {/* TODO: remove this sample code for when developing your app */}
            <div className="App">
              <header className="App-header">
                <img src={logo} className="App-logo" alt="logo" />
                <p>
                  Edit <code>src/App.tsx</code> and save to reload.
                </p>
                <a
                  className="App-link"
                  href="https://reactjs.org"
                  target="_blank"
                  rel="noopener noreferrer"
                >
                  Learn React
                </a>
              </header>
            </div>
          </Route>
        </Switch>
      </div>
    </Router>
  );
}

export default App;

I've left a TODO so that it's clear what to remove; for the purposes of testing the connection, I'm adding a Route which directs to /test, where a query will call the GraphQL query that in turn calls the backend and gets the users. This route renders a TestPage component, which is a function component organized into a pages folder. So let's create that: make a pages folder inside the src folder, then a folder called TestPage with an index.tsx file inside, containing the following:

import { useQuery } from '@apollo/client';
import React from 'react';
import { GET_TEST_DATA } from '../../graphql/queries/test';
import { TestQuery } from '../../generated/graphql';

const TestPage: React.FC = () => {
  const { data, error, loading } = useQuery<TestQuery>(GET_TEST_DATA);

  if (loading) {
    return <p>now loading...</p>;
  }

  return error ? (
    <p>{`There was an error: ${JSON.stringify(error)}`}</p>
  ) : (
    <div>
      <h2>From GraphQL</h2>
      <p>{`Response: ${data?.test?.message}`}</p>
      {data?.test?.data?.length && (
        <ul>
          {data?.test?.data.map((currUserData, i) => {
            const { name, userId } = currUserData!;

            return (
              <li key={i}>
                {`${userId}: ${name}`}
              </li>
            )
          })}
        </ul>
      )}
    </div>
  )
}

export default TestPage;

Here, we're using @apollo/client to leverage the useQuery hook, which calls queries on the middleware to kick off the backend-db communication through it. From the query we destructure three props: data, error, and loading. As appropriately named, we can use them as follows:

  • data is initially undefined, but will contain the info returned from the query call if it succeeds
  • error is initially undefined, but will hold an ApolloError object if something goes wrong when calling the query
  • loading is a boolean which is true while the query is in flight, then turns to false when it's done, error or not

These three allow plenty of flexibility in what to display on a page depending on its state, like showing an animation while the data loads, some error message with an image if there's a failure, and the data itself if the query call succeeds. But to avoid making this article any longer, I'm only displaying text in each case: a loading message while data loads, some text if there's an error, and a list if the data load is successful.

But back to the useQuery line: note that the type inside the angle brackets indicates the response type of the fetched data, and the variable inside the parentheses specifies the query we'll call. So we'll need two files from which we can import GET_TEST_DATA and TestQuery.

GET_TEST_DATA is simple; it’s just a query to match the query declared in the middleware. Create a folder called graphql inside src, then a folder queries inside that, then a file test.ts:

import { gql } from "@apollo/client";

export const GET_TEST_DATA = gql`
  query GetTestData {
    test {
      message
      data {
        name
        userId
      }
    }
  }
`;

Note that what's inside the test block specifies which values to send back in the response payload. That way we only get what we need and optimize the fetch process.

The import for TestQuery is a little more complicated. As the import suggests, we intend to have the response types generated by reading the types from the GraphQL middleware, then importing them into the frontend in a file where we can use them as types, so we know exactly what we're getting back, with no guesswork. For this, create a file graphql-types.js inside the frontend folder, with this:

require('dotenv').config();

const { generate } = require('@graphql-codegen/cli');

async function generateTypes() {
  const options = {
    overwrite: true,
    schema: process.env.REACT_APP_GRAPHQL_ENDPOINT,
    documents: 'src/graphql/queries/*',
    generates: {
      'src/generated/graphql.tsx': {
        plugins: [
          'typescript',
          'typescript-operations',
          'typescript-react-apollo'
        ],
        config: {
          withHooks: true,
          withHOC: false,
          withComponent: false,
        }
      }
    }
  };

  try {
    await generate(options, true); // true here means to print generation logs
  } catch (error) {
    console.error('Error during code generation', error);
  }
}

generateTypes();

Basically, as soon as this file is executed, the generateTypes function is called, which in turn calls the generate function from the @graphql-codegen/cli package. Its options look over the schema of queries (and mutations, if any) at the GraphQL url provided (our middleware GraphQL url), and indicate the directory of documents used to generate the types. Since the queries are at src/graphql/queries, we choose to grab everything inside. Lastly, the generates option states which file this generation will create and where, along with plugins and configuration to match our setup. Does the src/generated/graphql.tsx address seem familiar? That's the generated file we're importing in the page!

The last code change needed to obtain and generate the query response types is modifying the frontend's package.json scripts to call the graphql-types.js file, for which we add this script:

"generate": "node graphql-types.js",

Ensure that the frontend is not running and that the middleware is, then run this script with npm run generate to get the types. Once done, you should see frontend/src/generated/graphql.tsx, and it should look like this:

import { gql } from '@apollo/client';
import * as Apollo from '@apollo/client';

export type Maybe<T> = T | null;
export type InputMaybe<T> = Maybe<T>;
export type Exact<T extends { [key: string]: unknown }> = { [K in keyof T]: T[K] };
export type MakeOptional<T, K extends keyof T> = Omit<T, K> & { [SubKey in K]?: Maybe<T[SubKey]> };
export type MakeMaybe<T, K extends keyof T> = Omit<T, K> & { [SubKey in K]: Maybe<T[SubKey]> };
export type MakeEmpty<T extends { [key: string]: unknown }, K extends keyof T> = { [_ in K]?: never };
export type Incremental<T> = T | { [P in keyof T]?: P extends ' $fragmentName' | '__typename' ? T[P] : never };
const defaultOptions = {} as const;
/** All built-in and custom scalars, mapped to their actual values */
export type Scalars = {
  ID: { input: string; output: string; }
  String: { input: string; output: string; }
  Boolean: { input: boolean; output: boolean; }
  Int: { input: number; output: number; }
  Float: { input: number; output: number; }
};

export type Query = {
  __typename?: 'Query';
  test?: Maybe<Response>;
};

export type Response = {
  __typename?: 'Response';
  data?: Maybe<Array<Maybe<UserData>>>;
  message?: Maybe<Scalars['String']['output']>;
};

export type UserData = {
  __typename?: 'UserData';
  name?: Maybe<Scalars['String']['output']>;
  userId?: Maybe<Scalars['String']['output']>;
};

export type TestQueryVariables = Exact<{ [key: string]: never; }>;

export type TestQuery = { __typename?: 'Query', test?: { __typename?: 'Response', message?: string | null, data?: Array<{ __typename?: 'UserData', name?: string | null, userId?: string | null } | null> | null } | null };

export const TestDocument = gql`
  query Test {
    test {
      message
      data {
        name
        userId
      }
    }
  }
`;

/**
 * __useTestQuery__
 *
 * To run a query within a React component, call `useTestQuery` and pass it any options that fit your needs.
 * When your component renders, `useTestQuery` returns an object from Apollo Client that contains loading, error, and data properties
 * you can use to render your UI.
 *
 * @param baseOptions options that will be passed into the query, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options;
 *
 * @example
 * const { data, loading, error } = useTestQuery({
 *   variables: {
 *   },
 * });
 */
export function useTestQuery(baseOptions?: Apollo.QueryHookOptions<TestQuery, TestQueryVariables>) {
  const options = {...defaultOptions, ...baseOptions}
  return Apollo.useQuery<TestQuery, TestQueryVariables>(TestDocument, options);
}
export function useTestLazyQuery(baseOptions?: Apollo.LazyQueryHookOptions<TestQuery, TestQueryVariables>) {
  const options = {...defaultOptions, ...baseOptions}
  return Apollo.useLazyQuery<TestQuery, TestQueryVariables>(TestDocument, options);
}
export function useTestSuspenseQuery(baseOptions?: Apollo.SuspenseQueryHookOptions<TestQuery, TestQueryVariables>) {
  const options = {...defaultOptions, ...baseOptions}
  return Apollo.useSuspenseQuery<TestQuery, TestQueryVariables>(TestDocument, options);
}
export type TestQueryHookResult = ReturnType<typeof useTestQuery>;
export type TestLazyQueryHookResult = ReturnType<typeof useTestLazyQuery>;
export type TestSuspenseQueryHookResult = ReturnType<typeof useTestSuspenseQuery>;
export type TestQueryResult = Apollo.QueryResult<TestQuery, TestQueryVariables>;
The TestQuery line shows the response data type we use in the page component. This is the last piece of code we need to work with the query response data! Now, as you extract and build data from a query call, you should get IntelliSense hints for which fields you can use from the data.
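
As a side note: since withHooks is enabled, the generated file also exports ready-made hooks, so a component could skip the manual useQuery generics entirely. A minimal sketch (same behavior as the TestPage above, just using the generated useTestQuery hook):

import React from 'react';
import { useTestQuery } from '../../generated/graphql';

const TestPageGenerated: React.FC = () => {
  // the result is already typed as TestQuery, so no generics are needed
  const { data, error, loading } = useTestQuery();

  if (loading) return <p>now loading...</p>;
  if (error) return <p>{`There was an error: ${JSON.stringify(error)}`}</p>;

  return <p>{`Response: ${data?.test?.message}`}</p>;
};

export default TestPageGenerated;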

Final Project Structure

Once you’ve followed all the steps in this tutorial, you should have a complete folder structure like this:

Final Run

With that, head over to the root folder and run npm start. This will start the three apps: frontend, middleware and backend:

So once the frontend is ready, if you head over to http://localhost:3000/test you should see the output of TestPage, displaying the text the test resolver in the middleware adds once it receives the data from the backend:

Push your code to the repo. This will trigger deployments in the Vercel projects. Once done, visit the frontend deployment and once there, head to /test:

If you see the same behavior as on localhost, then success! Your three apps are now fully connected and ready for any and all customizations you want. Should you wish to start from the last step of this tutorial directly, feel free to fork this repo: https://github.com/gfcf14/react-graphql-node-mongo-boilerplate . It follows my setup of the three apps, though you'll have to set up the Vercel apps yourself.

With that, happy coding! Hopefully this article sheds some light, for anyone who reads it, on the fuller software development experience on the web.
