GraphQL in Action
Resolving Models and Relations

Working with database models and relations

This chapter covers

  • Creating object types for database models

  • Defining a global context shared among all resolvers

  • Resolving fields from database models and transforming their names and values

  • Resolving one-to-one and one-to-many relations

  • Working with database views and join statements

Now that you’ve learned the core concepts of building a simple "static" schema and how to resolve its fields, it’s time to get real and learn how to resolve fields from databases. It’s time to implement the queries of the AZdev API. We’ll go through them one by one as we designed them in Chapter 4, and we’ll learn the concepts we need as we use them.

To be able to work with "Need" records in the GraphQL schema, we need to define a new custom "type" for these records, just like what we did for the "numbersInRange" field when we converted it from a leaf field into a more featured one. We’ll have to create a custom type for each entity in the AZdev API. We’ll need a "Need" type, an "Approach" type, and a "User" type.

You can name these types anything you want. These names will show up in GraphQL tools (like GraphiQL) when clients are exploring your API service. I’ll stick with the Need/Approach/User names for simplicity.

Let’s start by implementing the main Need type. Here is the schema-language text we prepared for it:

Listing 6. 1. The Need type
type Need implements Node {
  id: ID!
  createdAt: String!
  content: String!
  tags: [String!]!

  # author: User!
  # approachList: [Approach!]!
}

The first query field that’ll use this Need type is the list of "featured needs" that’ll be displayed on the main page of the AZdev app. We named this field needFeaturedList:

Listing 6. 2. Using the Need type in the needFeaturedList field on Query
type Query {
  needFeaturedList: [Need!]!
}

To learn the concepts of this task in the right order, we’ll start with the simple scalar leaf fields: id, content, tags, and createdAt. We’ll implement author, approachList, and the Node interface after that.

The "featured needs" UI view will not include the list of approaches. That list will be displayed in the "single need" UI view. We’ll start with a generic full Need type in this chapter and later see how type-specificity might be used to enforce UI rules.

Here’s the simple GraphQL query that we can use to start testing this feature:

Listing 6. 3. The needFeaturedList query
query {
  needFeaturedList {
    id
    content
    tags
    createdAt
  }
}

This query should return an array of Need records in its data response. Each item in that array is an object with the 4 keys requested in Listing 6.3. The values will be coming from the PostgreSQL azdev.needs table (which has columns corresponding to these leaf field names).

Let’s implement this feature with the simplest code possible and improve it once we get it to naively work.

Make it work. Make it right. Make it fast.
— Kent Beck

We need to do 3 tasks to implement this:

  1. Define a new object type named Need which has the 4 fields (id, content, tags, and createdAt).

  2. Modify the top-level Query type to have a field named needFeaturedList that is a non-null list of non-null Need items and resolve it with an array of records from the azdev.needs table.

  3. Write any non-default resolvers for the Need type. We have to do case conversion for the created_at database column. We also decided to expose the tags GraphQL field as an array of strings instead of a database comma-separated string.

Let’s start with these tasks one at a time.

1.1. Defining object types

Here’s a possible implementation of the new Need type (without any resolvers):

Listing 6. 4. New file: lib/schema/types/need.js | az.dev/gia
import {
  GraphQLID,
  GraphQLObjectType,
  GraphQLString,
  GraphQLNonNull,
  GraphQLList,
} from 'graphql';

const Need = new GraphQLObjectType({
  name: 'Need',
  fields: {
    id: {
      type: new GraphQLNonNull(GraphQLID),
    },
    content: {
      type: new GraphQLNonNull(GraphQLString),
    },
    tags: {
      type: new GraphQLNonNull(
        new GraphQLList(new GraphQLNonNull(GraphQLString))
      ),
    },
    createdAt: {
      type: new GraphQLNonNull(GraphQLString),
    },
  },
});

export default Need;

The Need object here is just a direct translation of the schema-language text in Listing 6.1. The 6 simple lines in the schema-language version more than tripled with all the boilerplate code of the object-based method. The worst part is probably the type for the tags field. The simple [String!]! had to be written with three levels of nested function calls:

new GraphQLNonNull(
  new GraphQLList(
    new GraphQLNonNull(
      GraphQLString
    )
  )
)

There is no debate that the schema-language text is a better way to present this type. This is why many tools were created to enable building GraphQL schemas based on schema-language texts for types and other elements. The GraphQL.js buildSchema function itself was popularized by these tools. Some tools expanded the schema-language syntax to enable modularizing a text-based schema into many parts. Other tools were introduced to merge multiple text-based schemas and resolve any conflicts between them.
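For comparison, here’s a rough sketch (not part of the AZdev code) of how the same types could be built with the buildSchema function and schema-language text. Note that a schema built this way relies on default resolvers only, so you’d still need one of those tools (or a rootValue object) to attach custom resolver logic:

import { buildSchema } from 'graphql';

// A sketch for comparison only: build the schema from schema-language text.
// Custom resolver logic is not attached here.
const schema = buildSchema(`
  type Need {
    id: ID!
    createdAt: String!
    content: String!
    tags: [String!]!
  }

  type Query {
    needFeaturedList: [Need!]!
  }
`);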

All these tools are helpful and have their practical needs but I’d like to keep this book purely focused on the GraphQL.js implementation and use the natively-supported object-based method.

In the future, the GraphQL.js implementation will have an API that will allow you to use an SDL text with the object constructors method (so it’s not one way or the other). This new API is currently in its research phase and it’ll be a while before it makes it to the codebase. However, I wrote a package that allows you to use a similar idea with the current GraphQL.js implementation. I named it graphql-makers and you can read about it at jscomplete.com/graphql-makers.

1.2. The context object

The next step is to modify the top-level Query type to include the needFeaturedList field. This field is also a non-null list of non-null items, where each item will have the Need type that we introduced in Listing 6.4. So the type for the needFeaturedList field is another nested structure: new GraphQLNonNull(new GraphQLList(new GraphQLNonNull(Need))).

To resolve this field, we need to execute this SQL statement on the PostgreSQL database:

Listing 6. 5. SQL statement for the needFeaturedList field
SELECT * FROM azdev.needs WHERE is_featured = true

Before we can execute this SQL query, we need to open the pool of connections to PostgreSQL. To do that, we need to import the lib/db-clients/pg.js module and invoke its default export (which we named pgClient), just like we did when we loaded the seed data into the database.

But where exactly should we do this call now?

The pool of connections to a database should be started when the server itself is started and then made available to all the resolver functions that are going to use it. This is a lot more efficient than connecting to the database from within resolver functions.

The GraphQL.js implementation has a feature to help us make a pool of connections globally available to all resolvers. It’s called the "context" object.

The special context object enables resolver functions to communicate and share information because it gets passed to all of them (as their third argument) and they can read it and also write to it if needed. We only need a readable context for sharing the pool of database connections.

You can pass any object as the context object to the graphql executor function or to the graphqlHTTP listener function. The object that we need to make part of the global context here is the pgPool object that is returned by the pgClient function.

Here are the changes we need in lib/server.js to make the pgPool object available using the GraphQL context concept:

Listing 6. 6. Changes in lib/server.js (in bold) | az.dev/gia
import express from 'express';
import graphqlHTTP from 'express-graphql';
import { schema } from './schema/index.js';

import pgClient from './db-clients/pg.js';

const startGraphQLWebServer = async () => {
  const { pgPool } = await pgClient();

  const server = express();
  server.use(
    '/',
    graphqlHTTP({
      schema,
      context: { pgPool },
      graphiql: true,
    })
  );

  server.listen(8484, () => {
    console.log('API server is running');
  });
};

startGraphQLWebServer();

Now all resolver functions will have access to the context object and we can use the pgPool object to execute database queries in them!

The pgPool object has a query method we can use to execute a SQL statement. We can use it this way to execute the SELECT statement in Listing 6.5:

Listing 6. 7. Executing a SQL statement with pgPool
const pgResp = await pgPool.query(`
  SELECT * FROM azdev.needs WHERE is_featured = true
`);

The result of the query method is a promise that will resolve to an object which I named pgResp. This pgResp object will have a rows property holding an array of objects representing the rows returned by the database.

Listing 6. 8. Shape of the pgResp.rows property
[
  { id: 1, content: 'featured need #1', is_featured: true,  ·-·-·},
  { id: 2, content: 'featured need #2', is_featured: true,  ·-·-·},
  ·-·-·
]

Note how the pg package transforms every database row into a JavaScript object with the database field names as keys and the row values as the values for these keys. Also note how the key names use the snake-case format (e.g. is_featured).

The context object is exposed to each resolver function as the third argument (after source and args):

Listing 6. 9. The 4 arguments for each resolve function
resolve: (source, args, context, info) => {}
The 4th info argument will have information about the execution context: for example, which field/type this resolver is associated with. It’s rarely used but will come in handy in a few advanced cases.
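For example, here’s a quick sketch (not something the AZdev API needs) of a resolver that uses info to mimic the default resolver while logging which field it’s resolving:

resolve: (source, args, context, info) => {
  // info.fieldName is the name of the field being resolved;
  // info.parentType is the type that the field belongs to.
  console.log(`Resolving ${info.parentType.name}.${info.fieldName}`);
  return source[info.fieldName];
},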

The needFeaturedList field should be resolved with an array of Need records (and the rows property on the pgPool.query response in Listing 6.8 is that exact array). Remember from the previous chapter that you can return a promise from a resolver function and GraphQL.js will do the right thing with it. We can just resolve the needFeaturedList field with the promise returned by the pgPool.query function:

Listing 6. 10. Changes in lib/schema/index.js (in bold) | az.dev/gia
import {
  // ·-·-·
  GraphQLList,
} from 'graphql';

import Need from './types/need.js';

// ·-·-·

const QueryType = new GraphQLObjectType({
  name: 'Query',
  fields: {
    // ·-·-·
    needFeaturedList: {
      type: new GraphQLNonNull(
        new GraphQLList(new GraphQLNonNull(Need))
      ),
      resolve: async (source, args, { pgPool }) => {
        const pgResp = await pgPool.query(`
          SELECT * FROM azdev.needs WHERE is_featured = true
        `);
        return pgResp.rows;
      },
    },
  },
});

// ·-·-·

Go ahead and test things now. The API should now be able to answer this query:

{
  needFeaturedList {
    id
    content
  }
}

However, if you try to ask for the tags or createdAt fields, you’ll get errors:

  • For the tags field, you’ll get this error message: "Expected Iterable, but did not find one for field Need.tags."

  • For the createdAt field, you’ll get this error message: "Cannot return null for non-null field Need.createdAt".

Take a moment and think about these error messages and try to figure them out. I’ll explain them in the next section.

Since we did a SELECT * operation, all fields available in the azdev.needs table will be available on the parent source object in the Need type. However, only the properties represented by the defined fields will be available in the API. You can optimize the SQL statement to only include the fields that the API is interested in.

For example:

SELECT id, content, tags, created_at
FROM azdev.needs
WHERE is_featured = true

1.3. Transforming field names

In some cases, we need the API to represent columns and rows in the database with a different structure. Maybe the database has a confusing column name or maybe we want the API to consistently use camel-case for all field names and the database uses snake-case for its columns. This latter case is exactly what we have to deal with next. Columns on the azdev.needs table are snake-case in the database (for example, created_at) and we planned to represent all fields as camel-case in the API (createdAt). This means we cannot rely on the default resolvers like we did for the id and content fields.

With camel-case fields, the default resolver will try to find the property createdAt on the row returned by the database. That property does not exist. That’s why we got the error when we tried to ask for createdAt.

There are 3 main methods to deal with this issue.

1.3.1. Method #1:

We could simply map all properties on each row when we get data back from the database. This way the parent source object for the Need type will have the right property names, and all fields can keep using the default resolvers. For example, if we had a function caseMapper that takes an object and camel-cases all of its properties, we can modify the resolver of needFeaturedList to be:

Listing 6. 11. Changing the structure of objects received from the database
resolve: async (source, args, { pgPool }) => {
  const pgResp = await pgPool.query(
    'SELECT * FROM azdev.needs WHERE is_featured = true;'
  );
  return pgResp.rows.map(caseMapper);
},
The caseMapper function implementation is omitted here but you can use a function like camelizeKeys from the "humps" node package. That function even supports converting an array of objects and it’ll camel-case all properties on all objects in that array.
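If you’d rather not pull in a dependency for this, here’s a minimal sketch of what a caseMapper function could look like (it only handles simple snake_case keys):

// A minimal caseMapper sketch: converts snake_case keys to camelCase.
// Note that it loops over every property of the object it receives.
const caseMapper = (obj) =>
  Object.entries(obj).reduce((acc, [key, value]) => {
    const camelKey = key.replace(/_([a-z])/g, (match, letter) =>
      letter.toUpperCase()
    );
    acc[camelKey] = value;
    return acc;
  }, {});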

Can you spot a problem with this method? Actually, can you spot TWO problems with it?

Not only are we looping over each row in the returned set (with the map method), but the caseMapper function also loops over the properties of each row.

This is probably not a big deal if you’re working with small sets of data, but it would be ideal if the database driver itself supported this transformation. Unfortunately, the pg library we’re using does not. However, PostgreSQL itself has something up its sleeve to help us avoid needing to do the transformation in the first place! That’s method #2.

1.3.2. Method #2:

We could use PostgreSQL’s column alias feature to have PostgreSQL itself return the rows with camel-case keys natively. However, this solution requires listing all the leaf fields in the SELECT statement itself. For example, here’s a version of the needFeaturedList resolver function that implements this method:

Listing 6. 12. Using PostgreSQL aliases to get everything in camel-case natively
resolve: async (source, args, { pgPool }) => {
  const pgResp = await pgPool.query(`
    SELECT id, content, tags, created_at as "createdAt"
    FROM azdev.needs
    WHERE is_featured = true
  `);
  return pgResp.rows;
},

Note how I use the as "createdAt" syntax to rename the returned column. The quotes around the alias are required because PostgreSQL is case-insensitive with unquoted identifiers; to force it to preserve the casing, you need to use quotes.

I like this method a bit better than the first method because we don’t need to perform extra loops with it. Data comes from PostgreSQL in the exact shape that we need. The listing of fields in the SELECT statement can be automated with the right tools. However, for the purposes of this book, I am going to use method #3 because it’s the one supported by GraphQL.js itself. If, for example, you’re using a database that does not support this column alias feature, method #3 is the way to go.
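To illustrate, the column listing could be automated with a small helper. Here’s a hypothetical sketch that derives snake_case column names (and quoted aliases) from the camel-case field names the API exposes:

// Hypothetical helper: builds a column list like 'id, content, created_at AS "createdAt"'
// from the camel-case field names used in the API.
const selectColumns = (fieldNames) =>
  fieldNames
    .map((field) => {
      const column = field.replace(/[A-Z]/g, (letter) => `_${letter.toLowerCase()}`);
      return column === field ? column : `${column} AS "${field}"`;
    })
    .join(', ');

// selectColumns(['id', 'content', 'tags', 'createdAt'])
//   => 'id, content, tags, created_at AS "createdAt"'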

1.3.3. Method #3:

The third way to deal with the case issue is to create custom resolvers for the fields that need to be mapped. I like the readability of this method and it’s also helpful when you need to perform other custom logic on the value, not just map it as is.

For example, we can change the createdAt field in Listing 6.4 to include this resolve function:

Listing 6. 13. Adding custom resolve functions to leaf fields
createdAt: {
  type: new GraphQLNonNull(GraphQLString),
  resolve: (source) => source.created_at,
},

This takes care of the case issue because we are resolving a createdAt field using the created_at property available on the parent source object (which is the row object coming from the database). You’ll need to do this for each multi-word field.

If you test the API now and ask for the createdAt field on the needFeaturedList field, the API will happily reply with a value. That value, however, will be a 13-digit number!

Figure 6. 1. The createdAt field is a 13-digit number

1.4. Transforming field values

GraphQL’s default serialization for JavaScript date objects is to call the valueOf method on them, and that method is equivalent to calling getTime (which returns the number of milliseconds since midnight, January 1st, 1970 UTC). If we want to serialize fields (including date fields) differently, we can do that in their custom resolver functions. For example, let’s serialize all date-time fields for the AZdev API using the UTC ISO format. We can make use of the JavaScript toISOString method for this.
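To see the difference between the two, here’s what each method returns for the same date value (the exact numbers are just illustrative):

const date = new Date('2021-01-01T00:00:00Z');

date.valueOf();      // 1609459200000 (milliseconds since the epoch, same as getTime)
date.toISOString();  // '2021-01-01T00:00:00.000Z'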

We’ll need to change the createdAt field’s resolve function to something like:

Listing 6. 14. Changing date fields serializations
createdAt: {
  type: new GraphQLNonNull(GraphQLString),
  resolve: (source) => source.created_at.toISOString(),
},

Now the API will display values of createdAt using the ISO format:

Figure 6. 2. The createdAt field is as an ISO string

What about the tags field? Currently, the API is displaying this error for it: "Expected Iterable, but did not find one for field Need.tags."

This is because we defined the tags field as a GraphQLList type. The GraphQL executor expects its resolved value to be an "Iterable", like an array. The default resolver for the tags field is currently resolving with what’s in the tags database column, and that’s a string of comma-separated values (for example, "node,git"). We need to transform this value into an array of strings instead (so ["node", "git"]). We do that with a custom resolver function:

Listing 6. 15. Any logic can be used in resolver functions
tags: {
  type: new GraphQLNonNull(
    new GraphQLList(new GraphQLNonNull(GraphQLString))
  ),
  resolve: (source) => source.tags.split(','),
},

With that, the resolver will return an array when asked for the tags property.

Figure 6. 3. The tags comma-separated value exposed as array in the API

As you can see, it is fairly easy to control the shape of the GraphQL schema and use powerful transformation on the raw data returned by the database.

1.5. Separating interactions with PostgreSQL

Before we continue implementing the author/approach relations on the Need type, let’s do a small refactoring. Instead of using SQL statements directly in resolver functions, let’s introduce a module responsible for communicating with PostgreSQL and just use that module’s API in the resolver functions.

This separation of responsibilities will generally improve the readability of the API’s code. The logic to fetch things from PostgreSQL will not be mixed together with the logic to transform raw data into the public API. This new module will also improve the maintainability of the code! If the database driver changes its API or if a decision is made to use a different database driver altogether, you can just make these changes in one place instead of many. You’ll also have one place where you can add logging or any other diagnostics around database calls. It’ll also be a lot easier to test this new module on its own in isolation from the other logic in resolver functions, and to test the logic in resolvers by mocking this new module out.

I’ll name this new module pgApi. We’ll expose it in the context object instead of the driver’s native pgPool object and make all read and write interactions with PostgreSQL through this new pgApi module.

Let’s also design the pgApi module to group interactions using the main table for each interaction. For example, instead of doing a call like pgApi.getAllFeaturedNeeds(), let’s have a needs object on pgApi (to match the needs table in the database) and have functions on that needs object. For example, pgApi.needs.allFeatured(). I think that’ll make the module a bit more maintainable as well.

When you have a refactoring in mind, a good step to take before starting that refactor is to write some test cases and make sure they continue to pass after your refactor. I’ll skip testing here but if you’re interested in seeing the tests I ended up adding to the project, check out the GitHub repository at az.dev/contribute.
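For example, a test that exercises the current behavior could look roughly like the following sketch. It assumes Jest and uses a fake pgPool passed through the context object (the paths and details here are illustrative, not the repository’s actual tests):

import { graphql } from 'graphql';
import { schema } from './schema/index.js'; // path depends on where the test file lives

test('needFeaturedList resolves with rows from the database', async () => {
  // A fake pgPool whose query method resolves with one hard-coded row
  const fakePgPool = {
    query: async () => ({
      rows: [
        { id: 1, content: 'Test need', tags: 'node,git', created_at: new Date() },
      ],
    }),
  };

  const result = await graphql(
    schema,
    '{ needFeaturedList { id content tags createdAt } }',
    null,                   // rootValue
    { pgPool: fakePgPool }  // contextValue
  );

  expect(result.errors).toBeUndefined();
  expect(result.data.needFeaturedList).toHaveLength(1);
  expect(result.data.needFeaturedList[0].tags).toEqual(['node', 'git']);
});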

Let’s also move the line in lib/server.js where we called the pgClient function to get a pgPool into this new pgApi module. This is an asynchronous operation. This means we need the pgApi module to be wrapped in an async function. I’ll name this function pgApiWrapper and make it the default export in the new db-api/pg.js file where I am going to put the code for this new module.

Here’s the implementation I came up with. Put this in lib/db-api/pg.js:

Listing 6. 16. New file: lib/db-api/pg.js | az.dev/gia
import pgClient from '../db-clients/pg.js';

const sqls = {
  featuredNeeds:
    'SELECT * FROM azdev.needs WHERE is_featured = true;',
};

const pgApiWrapper = async () => {
  const { pgPool } = await pgClient();

  const pgQuery = (text, params) => pgPool.query(text, params);

  return {
    needs: {
      allFeatured: async () => {
        const pgResp = await pgQuery(sqls.featuredNeeds);
        return pgResp.rows;
      },
    },
  };
};

export default pgApiWrapper;

Note how I separated the SQL statements into their own object. Once we have many statements in the API, it’ll be very helpful to see all of them in one place. Note also how I introduced a new function pgQuery that’s basically an alias to pgPool.query. This is because pgPool.query is the current driver’s method while pgQuery is now my own method whose implementation I can change any time in just one place.

Wrapping 3rd-party APIs is generally a good practice but don’t overdo it! For example, I have not wrapped the GraphQL.js API yet because the whole structure of the project so far depends on it. The AZdev API code is not just using GraphQL.js; it’s built around it. When GraphQL.js makes a non-backward compatible change to its API it’ll probably be time for a complete project overhaul. However, the other benefits to wrapping a 3rd-party API still apply. For example, if we’d like to change the "syntax" of creating type objects to reduce the boilerplate and make use of something similar to the schema-language then we can introduce a wrapper for that.

Now we need to change the context object in lib/server.js to use the new pgApiWrapper function instead of the driver-native pgClient function:

Listing 6. 17. Changes in lib/server.js (in bold) | az.dev/gia
import pgApiWrapper from './db-api/pg.js';    (1)

const server = express();

const startGraphQLWebServer = async () => {
  const pgApi = await pgApiWrapper();         (2)

  server.use(
    '/',
    graphqlHTTP({
      schema,
      context: { pgApi },
      graphiql: true,
    })
  );
1 This line replaces the db-clients import line
2 This line replaces the pgClient() call line

Finally, we need to change the top-level schema file. We need to change the resolve function for needFeaturedList to use the new pgApi instead of issuing a direct SQL statement:

Listing 6. 18. Changes in lib/schema/index.js (in bold) | az.dev/gia
needFeaturedList: {
  type: new GraphQLNonNull(new GraphQLList(new GraphQLNonNull(Need))),
  resolve: async (source, args, { pgApi }) => {
    return pgApi.needs.allFeatured();
  },
},

That’s it. You can test all these changes using the same query we’ve been using so far. Nothing changed on the public API service but a lot has changed in the code base. The code is off to a better start.

It’s now time for us to talk about resolving relations. This will highlight one of the biggest challenges when creating a GraphQL API: the infamous N+1 queries problem.

2. Resolving relations

The remaining fields on the Need type are author and approachList. We’ll need to implement 2 new GraphQL types for them. I’ll name them User and Approach.

These fields will not be leaf fields in a query. They represent relations. A Need has one Author and many Approaches. To resolve these fields, the GraphQL server will have to execute SQL statements over many tables and return objects from these tables.

When we’re done implementing the author and approachList fields, the API server should accept and reply to this query:

Listing 6. 19. The needFeaturedList complete query | az.dev/gia
{
  needFeaturedList {
    id
    content
    tags
    createdAt

    author {
      id
      email
      name
      createdAt
    }

    approachList {
      id
      content
      createdAt

      author {
        id
        email
        name
        createdAt
      }
    }
  }
}

This is the complete query that should be supported by the API service. Note the nested fields Need → Author and Need → Approach → Author.

An Approach also has an Author and to complete the needFeaturedList top-level field we’ll have to implement that relation as well.

With this query, we would like to get all information about all featured needs, who authored them, what approaches are defined on them, and who authored these approaches!

2.1. Resolving a one-to-one relation

The author field has to be resolved from the azdev.users table. The foreign key that connects a Need object to a User object is the user_id field on the azdev.needs table. When we resolved the needFeaturedList field with a list of Need objects, each of these objects had a value in its user_id property. For each of them, we have to execute this other SQL statement to get information about the User who authored it:

Listing 6. 20. In lib/db-api/pg.js: SQL statement to get information about a single user | az.dev/gia
const sqls = {
  // ·-·-·
  userInfo: 'SELECT * FROM azdev.users WHERE id = $1',
};

Note how this userInfo SQL statement has a $1 in it. This is new. It’s the syntax we can use with the pg driver to insert a variable into the SQL statement without resorting to string concatenation. The statement is expected to be executed with 1 variable and that variable will be used to replace the $1 part.

Next we need to design a function in the pgApi module to execute the new userInfo SQL statement. Let’s design that function to accept a userId value as an argument. I’ll name this function byId and put it under a new users property on the top-level pgApi.

Listing 6. 21. In lib/db-api/pg.js: New function to use the userInfo SQL statement | az.dev/gia
const pgApiWrapper = async () => {
  // ·-·-·
  return {
    // ·-·-·
    users: {
      byId: async (userId) => {
        const pgResp = await pgQuery(sqls.userInfo, [userId]);
        return pgResp.rows[0];
      },
    },
  };
};

The [userId] argument to pgQuery is an array of positional variables needed in the query. The first item in that array will be $1 in the query and so on. This syntax matches what the pg driver needs (although we can change it if we want because of the wrapper we introduced).

Note how I returned the first row from the query response (with pgResp.rows[0]). We know the userInfo SQL statement returns either 1 row or nothing at all (because the id column is unique). It’s not a list of rows like featuredNeeds. The pg driver always returns the rows property on its response as an array, even when it’s just one row.

Note how I am designing the interactions with PostgreSQL first here instead of starting with the GraphQL type and resolver functions and working my way down to the PostgreSQL interactions (which is what we did for the needFeaturedList field). What’s important to note here is that we can do each side of this task in complete isolation from the other! We’ll implement the approaches relation next starting from the resolver function side, as we did for needFeaturedList.

To make the GraphQL server aware of this author field we need to define the User type. Everything in a GraphQL schema has to have a type. In the schema-language text, we had this structure for the User type:

Listing 6. 22. The User type
type User {
  id: ID!
  email: String
  name: String
  createdAt: String!
}

Remember the 3 tasks we did for the Need type? We need to do something similar for the User type.

  1. Define a new object type named User which has the 4 fields (id, email, name, and createdAt).

  2. Modify the Need type to have a field named author that is a non-null object of type User (the new one) and resolve this field with a record from the azdev.users table using the new byId function in Listing 6.21.

  3. Write any non-default resolvers for the User type. We have to do case conversion for the created_at database column. Let’s also combine the first_name and last_name database columns in the azdev.users table into a single name field for the API.

You can do these tasks in any order but I like to start with #1, then do #3 because it’s related, and then do #2.

Here’s a possible implementation of the new User type along with its non-default resolvers for name and createdAt:

Listing 6. 23. New file: lib/schema/types/user.js | az.dev/gia
import {
  GraphQLID,
  GraphQLObjectType,
  GraphQLString,
  GraphQLNonNull,
} from 'graphql';

const User = new GraphQLObjectType({
  name: 'User',
  fields: {
    id: {
      type: new GraphQLNonNull(GraphQLID),
    },
    email: {
      type: GraphQLString,
    },
    name: {
      type: GraphQLString,
      resolve: ({ first_name, last_name }) => `${first_name} ${last_name}`,
    },
    createdAt: {
      type: new GraphQLNonNull(GraphQLString),
      resolve: ({ created_at }) => created_at.toISOString(),
    },
  },
});

export default User;

Note how for the resolve functions of both the name and createdAt fields, I destructured the properties that are to be used in the resolver out of the first source argument. You can do that if you want to (but you don’t have to).

To use this new User type, we need to import it in the Need type and make the new author field use it. To resolve the author field, we just make a call to the byId function we added to pgApi:

Listing 6. 24. Changes in lib/schema/types/need.js (in bold) | az.dev/gia
import User from './user.js';

const Need = new GraphQLObjectType({
  name: 'Need',
  fields: {
    // ·-·-·

    author: {
      type: new GraphQLNonNull(User),
      resolve: (source, args, { pgApi }) =>
        pgApi.users.byId(source.user_id),
    },
  },
});

That’ll do it. You can test this new relation with this query:

Listing 6. 25. Query to test the Need/Author relation
{
  needFeaturedList {
    content
    author {
      id
      email
      name
      createdAt
    }
  }
}

The API will display the information about the Author for each featured Need (which is the same test record in the seed data we’re using):

Figure 6. 4. Getting Author information for each Need object

2.1.1. Dealing with null values

There is a small problem in the data response in Figure 6.4. The Author name was returned as "null null". Why is that?

The null concept is confusing. Different coders associate different meanings with it. You need to be careful to always consider the possibility of dealing with it. You should always ask yourself the "what if this is null" question about every variable you use in your code. This is one of the reasons why tools like TypeScript and Flow are popular. They can help detect these problems.

When we used the template string ${first_name} ${last_name} we should have asked ourselves that question! What if these properties are null? JavaScript will just insert "null" as a string there. That is what’s happening in Figure 6.4. How do we solve it?

First of all, do we need to have first_name and last_name as nullable columns in the database? Will there ever be a different semantic meaning between null and empty string in these columns? If not, which is the most likely answer here, then it would have been a better design decision to make these fields non-null in the database itself and maybe make them default to an empty string.

For the sake of example, let’s assume that we don’t have control over the structure of the database table and/or we cannot fix the data that’s already there. This does not mean we should leak these problems out to the consumers of this API. We can make the API’s name field non-null and make it always return either the name or an empty string, instead of nulls or, worse, nulls cast into a string.

There are countless ways to implement that. Here’s one:

Listing 6. 26. Changes to the name field in lib/schema/types/user.js | az.dev/gia
name: {
  type: new GraphQLNonNull(GraphQLString),
  resolve: ({ first_name, last_name }) =>
    [first_name, last_name].filter(Boolean).join(' '),
},

This way the API will always return a string that will ignore null values in either first_name or last_name or in both:

Figure 6. 5. Making the name field always return a value (that could be empty string)

2.1.2. The N+1 queries problem

Now that we have implemented a relation and made the GraphQL server execute multiple SQL statements for it, we can talk about the N+1 queries problem which is the first big challenge when implementing GraphQL services.

To see this problem in action, you’ll need to enable logging for your PostgreSQL service and tail the logs while you execute GraphQL queries.

How to enable logging for PostgreSQL will be different based on your platform, OS, version, and many other factors so you’ll need to figure this one out on your own. Make sure the logs print a SQL query every time it gets executed on the PostgreSQL service.
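For example, on a local installation you might be able to turn on statement logging with something like the following (run as a superuser); your setup may instead require editing postgresql.conf and reloading the service:

-- One possible way to enable statement logging (details vary per setup)
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();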

Once you’re tailing the logs, execute the query in Listing 6.25 and find every instance in the log related to executing a SQL statement. Here are the SQL queries that were executed on my PostgreSQL server when I tested this:

Listing 6. 27. Excerpt from my PostgreSQL logs showing the N+1 problem
LOG:  statement: SELECT * FROM azdev.needs WHERE is_featured = true;
LOG:  execute <unnamed>: SELECT * FROM azdev.users WHERE id = $1
DETAIL:  parameters: $1 = '1'                                          (1)
LOG:  execute <unnamed>: SELECT * FROM azdev.users WHERE id = $1
DETAIL:  parameters: $1 = '1'
LOG:  execute <unnamed>: SELECT * FROM azdev.users WHERE id = $1
DETAIL:  parameters: $1 = '1'
1 "1" is the ID value for the user I used in the seeds data. You might have a different value.
PostgreSQL will likely log a lot more lines around these executions. For example, you might see parse/bind lines as well. Find the ones that "execute" statements and focus on those here.

Why are we executing 4 SQL queries? Because we have 1 main query (for Needs) and we have 3 featured Need records in the seed data we’re using. For each Need record we’re asking the database about its associated User record. That’s 3 queries for users + 1 main query. This 3+1 is the N+1 problem. If we had 41 featured Need objects in the azdev.needs table, we’ll be executing 42 SQL queries here.

Clearly this is a problem. We should not be doing that.

There are many ways to fix this. I’ll show you one of them here and we’ll see a better one in the next chapter.

An easy way to fix this problem is through the direct use of database joins (or database views that are based on joins). Database joins are powerful. You can form a single SQL query that gets information from 2 or more tables at once.

For example, to fetch a Need record’s information and the information for its associated User in the same SQL response, we can do a join like:

Listing 6. 28. In lib/db-api/pg.js | JOIN SQL statement (in bold) | az.dev/gia
const views = {
  needsAndUsers: `
    SELECT n.*,
      u.id AS author_id,
      u.email AS author_email,
      u.first_name AS author_first_name,
      u.last_name AS author_last_name,
      u.created_at AS author_created_at
    FROM azdev.needs n
    JOIN azdev.users u ON (n.user_id = u.id)
  `,
};

The needsAndUsers string can act like a view for us, and we can use it to create an actual database view object if we want to. However, let’s just use it inline for this example.
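For the record, if you did want a real view object, the same SELECT could be wrapped in a CREATE VIEW statement (a sketch with a hypothetical view name; we won’t use it in the book’s code):

-- A sketch of creating an actual view from the same JOIN query
CREATE VIEW azdev.needs_and_users AS
  SELECT n.*,
    u.id AS author_id,
    u.email AS author_email,
    u.first_name AS author_first_name,
    u.last_name AS author_last_name,
    u.created_at AS author_created_at
  FROM azdev.needs n
  JOIN azdev.users u ON (n.user_id = u.id);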

Figure 6. 6. The one SQL view that has both Need and Author info

Note how I used column aliases to prefix the users table columns with "author_". If we don’t do that, there might be a conflict in column names (for example, both tables have an id column). This prefixing will also make it easier for us to implement the GraphQL resolvers for this relation. That’s why I prefixed all the columns.

To use the needsAndUsers view, instead of selecting from the azdev.needs table for the featuredNeeds SQL query we can select from the new needsAndUsers view:

Listing 6. 29. Using a join view in SQL | az.dev/gia
const sqls = {
  featuredNeeds: `
    SELECT *
    FROM (${views.needsAndUsers}) nau
    WHERE is_featured = true;
  `,
};

With that, the parent source object used to resolve the Need type will also have author_ prefixed columns that hold the author information inline in the same object. No further SQL queries are needed. However, we need to "extract" the prefixed columns into an object suitable to be the parent source object for the User type resolvers. Let’s create a utility function for that. I’ll name it extractPrefixedColumns. Here’s how we’ll use it in the Need type:

Listing 6. 30. Change the resolve function for the Need.author field | az.dev/gia
import { extractPrefixedColumns } from '../utils.js';

const Need = new GraphQLObjectType({
  name: 'Need',
  fields: {
    // ·-·-·

    author: {
      type: new GraphQLNonNull(User),
      resolve: prefixedObject =>
        extractPrefixedColumns({ prefixedObject, prefix: 'author' }),
    },
  },
});

The implementation of extractPrefixedColumns can be a simple reduce call to filter the columns and only include the prefixed ones but without their prefixes:

Listing 6. 31. New function in lib/utils.js | az.dev/gia
export const extractPrefixedColumns = ({ prefixedObject, prefix }) => {
  const prefixRexp = new RegExp(`^${prefix}_(.*)`);
  return Object.entries(prefixedObject).reduce((acc, [key, value]) => {
    const match = key.match(prefixRexp);
    if (match) {
      acc[match[1]] = value;        (1)
    }
    return acc;
  }, {});
};
1 match[1] here will be the prefixed column name without the prefix part.
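For example, given a row from the join view, the function would behave like this (illustrative values, not actual seed data):

const row = {
  id: 1,
  content: 'featured need #1',
  author_id: 7,
  author_email: 'test@example.com',
  author_first_name: 'Test',
  author_last_name: 'User',
};

extractPrefixedColumns({ prefixedObject: row, prefix: 'author' });
// => { id: 7, email: 'test@example.com', first_name: 'Test', last_name: 'User' }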

That’s it! You can test the query in Listing 6.25 and it’ll work exactly the same except now instead of N+1 executed statements in the logs there will be exactly 1 statement:

Listing 6. 32. Excerpt from my PostgreSQL logs showing how only 1 query was executed
LOG: statement: SELECT * FROM (
      SELECT n.*,
        u.id AS author_id,
        u.email AS author_email,
        u.first_name AS author_first_name,
        u.last_name AS author_last_name,
        u.created_at AS author_created_at
      FROM azdev.needs n
      JOIN azdev.users u ON (n.user_id = u.id)
    ) nau WHERE is_featured = true;

This method is simple. It’s efficient in terms of communicating with PostgreSQL. It does, however, mean that for each returned row we need to perform a loop over its properties to extract the prefixed keys. This can be improved in the code by changing the User type to resolve its fields directly from the prefixed values. However, that means adding more complexity that I think will make the code less readable. There is a better way, and it’s what we’ll discuss in the next chapter. I’ll undo all the view-based changes that I made to solve the N+1 problem and we’ll go with the other solution for it.

However, before we talk about the other solution, let’s resolve the last remaining relation under needFeaturedList: the list of Approaches (and their Authors).

2.2. Resolving a one-to-many relation

We’ll implement the approachList field starting from the types and resolvers. We need to modify the Need type to add the new approachList field. That field is a non-null list of non-null Approach objects (Approach being the new GraphQL type for the Approach model that we need to introduce). To resolve the approachList field, we need a new function in the pgApi module that takes a needId and returns an array of Approach objects associated with it. Let’s name this function approachList.

Listing 6. 33. Changes in lib/schema/types/need.js (in bold) | az.dev/gia
import Approach from './approach.js';

const Need = new GraphQLObjectType({
  name: 'Need',
  fields: {
    // ·-·-·
    approachList: {
      type: new GraphQLNonNull(
        new GraphQLList(new GraphQLNonNull(Approach))     (1)
      ),
      resolve: (source, args, { pgApi }) =>
        pgApi.needs.approachList(source.id),              (2)
    },
  },
});
1 Approach is the new GraphQL type we need to introduce
2 approachList receives a needId value and should return a list of Approach objects
This code uses 2 things that we do not have yet: the Approach type and the pgApi.needs.approachList function.

We’ll need to implement the new Approach type next. This is the schema-language text we have for it:

Listing 6. 34. The Approach type in the schema-language text
type Approach implements Node {
  id: ID!
  createdAt: String!
  content: String!
  voteCount: Int!
  author: User!
  detailList: [ApproachDetail!]!  (1)
}
1 I’ll implement detailList in Chapter 8

Nothing is new about the implementation of this type. We can use the default resolvers for id and content, a case-mapping for voteCount, the same ISO casting for createdAt, and for the author field we can use the same code we used for the Need type’s author field.

Listing 6. 35. New file in lib/schema/types/approach.js | az.dev/gia
import {
  GraphQLID,
  GraphQLObjectType,
  GraphQLString,
  GraphQLInt,
  GraphQLNonNull,
} from 'graphql';

import User from './user.js';

const Approach = new GraphQLObjectType({
  name: 'Approach',
  fields: {
    id: {
      type: new GraphQLNonNull(GraphQLID),
    },
    content: {
      type: new GraphQLNonNull(GraphQLString),
    },
    voteCount: {
      type: new GraphQLNonNull(GraphQLInt),
      resolve: ({ vote_count }) => vote_count,
    },
    createdAt: {
      type: new GraphQLNonNull(GraphQLString),
      resolve: ({ created_at }) => created_at.toISOString(),
    },
    author: {
      type: new GraphQLNonNull(User),
      resolve: (source, args, { pgApi }) =>
        pgApi.users.byId(source.user_id),
    },
  },
});

export default Approach;

The approachList field has to be resolved from the azdev.approaches table. The foreign key that connects a Need object to a list of Approach objects is the need_id field in the azdev.approaches table. For each resolved Need object, we need to issue this SQL statement to get the information about the list of Approach objects available under it:

Listing 6. 36. Changes in lib/db-api/pg.js: New SQL statement (in bold) | az.dev/gia
const sqls = {
  // ·-·-·
  approachesForNeed:
    'SELECT * FROM azdev.approaches WHERE need_id = $1',
};

Note how this SQL statement also uses a $1 variable because we’ll have to give it a Need ID value.

Naming is hard and I’m probably not very good at it. In the current code-base for the AZdev API, you might see different names from what I initially used here in the book. In fact, if you can think of better names than what’s currently in the AZdev API GitHub repository please open a pull request!

You can find the GitHub repo for the AZdev project at az.dev/contribute.

The new approachesForNeed SQL statement will be used by the approachList function that we’ll have to implement next. That function has a needId parameter and we’ve designed it to be under the needs property on the top-level pgApi:

Listing 6. 37. Changes in lib/db-api/pg.js (in bold) | az.dev/gia
const pgApiWrapper = async () => {
  // ·-·-·
  return {
    needs: {
      // ·-·-·
      approachList: async (needId) => {
        const pgResp = await pgQuery(sqls.approachesForNeed, [
          needId,
        ]);
        return pgResp.rows;
      },
    },
    // ·-·-·
  };
};

I hope this is getting easier for you! We still have a lot more examples to go through but the needFeaturedList example is complete! Go ahead and test the full query for it from Listing 6.19:

Figure 6. 7. The response for the complete needFeaturedList query

Guess how many SQL statements we’re sending to PostgreSQL to satisfy this GraphQL query?!

  • 1 for the main featured Need list

  • 1 for each of Need’s Author info (3)

  • 1 for each of Need’s list of Approaches (3)

  • 1 for each Approach on each Need to get that Approach’s Author info. We have a total of 5 Approaches in the seed data.

That’s a total of 12 SQL statements! We can still fix this with database views, but it’ll add a lot more complexity to the code. Let’s explore the better option, which is to use the Data Loader concept. We’ll talk about that in the next chapter.

3. Summary

  • Start with the simplest implementations you can think of. Make things work then improve on your implementations.

  • The object-based method for creating a GraphQL schema is a lot more verbose than the schema-language method, which makes it less readable.

  • You can use the GraphQL context object to make a pool of db connections available to all resolver functions.

  • You can use fields' resolvers to transform the names and values of your data elements. The GraphQL API does not have to match the structure of the data in the database.

  • Try to separate the logic for database interactions from other logic in your resolvers.

  • It’s a good practice to wrap 3rd party APIs with your own calls. This gives you some control over their behavior and makes the code a bit more maintainable going forward.

  • Resolving database relations involves issuing SQL statements over many tables. This causes an N+1 queries problem by default because of the graph-resolving nature of GraphQL. We can solve this problem using database views, but that complicates the code in the GraphQL service. In the next chapter, we’ll learn about the Data Loader concept, which is a better way to deal with the N+1 problem and to make your GraphQL service more efficient in general.