AWS Feed
Using PostgreSQL with Spring Boot on AWS — Part 2
This is the second installment of a two-part tutorial by Björn Wilmsmann, Philip Riecks, and Tom Hombergs, authors of the book Stratospheric: From Zero to Production with Spring Boot and AWS. Björn, Philip, and Tom previously wrote the blog posts Getting started with Spring Boot on AWS Part 1 and Part 2.
In Using PostgreSQL with Spring Boot on AWS — Part 1, we explored how to deploy an RDBMS on Amazon RDS. In Part 2, we walk through using the database we deployed from a Spring Boot application.
Configuring the database in the application
To connect to the PostgreSQL database instance, our sample application needs a few additional dependencies: org.postgresql:postgresql contains the JDBC driver for connecting to the PostgreSQL database, whereas org.springframework.boot:spring-boot-starter-data-jpa provides us with Spring Data JPA (and its dependencies), a widely used library for accessing databases from within Spring Boot applications.
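In Gradle syntax, these two dependencies might be declared along these lines (a sketch; version management through the Spring Boot Gradle plugin is assumed):

```groovy
dependencies {
  // Spring Data JPA and its transitive dependencies (Hibernate, etc.)
  implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
  // PostgreSQL JDBC driver, only needed at runtime
  runtimeOnly 'org.postgresql:postgresql'
}
```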
Once we’ve started the service stack, our Spring Boot application will automatically connect to the PostgreSQL database using the standard SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME, and SPRING_DATASOURCE_PASSWORD parameters we set before.
Using the database for storing and retrieving todos
Now that we’ve prepared the surrounding infrastructure and our Spring Boot app is configured to use our new PostgreSQL database, we can finally put it to use in our application code.
If you’re familiar with using JPA in a Spring Boot application, you can skip this section and have a look at how to set up a local development environment for database connectivity.
The domain model of our application is structured around the Todo entity.
Our application follows a package-by-feature structure. This means that the feature folders collaboration, person, registration, and todo contain the code artifacts related to each of those features. These packages include controllers, service interfaces (and their implementations), Spring Data JPA repositories, and data model classes.
We’ll have a closer look at some of the classes from the dev.stratospheric.todo package to examine how we use the newly created PostgreSQL database for our application.
We’ll focus on the Todo class and how it’s used. This class is annotated with the @Entity annotation from the javax.persistence package (the API provided by JPA / Jakarta Persistence). This annotation marks the class as a database entity. By default, the unqualified class name (as opposed to its fully qualified name, which would include the package name) is used as the entity’s database table name:
Just as the class name maps to the table name by default, the attribute names match the column names. Most of these attributes carry one or more annotations, which allow us to further specify rules and constraints that apply to an attribute.
Primary key and object identity
The id attribute is annotated with @Id from the javax.persistence package, marking this attribute as the entity’s unique identifier (or primary key).
The @GeneratedValue annotation (also from the javax.persistence package) denotes that the attribute’s value is generated automatically. The strategy = GenerationType.IDENTITY argument further specifies that this value is provided through an identity column in the database table.
Constraints and validation
Some attributes are annotated with annotations from the javax.validation.constraints package, such as @NotBlank, @Size, or @Future. These annotations allow us to define the rules, or constraints, we’d like to apply to each attribute and its value. For example, the todo’s title must be a non-empty (not blank) string of at most 30 characters.
Storing and retrieving information
Now that we have defined our database entity, we can use it for storing and retrieving information. With Spring Data JPA, the abstraction for doing so is the JpaRepository interface from the org.springframework.data.jpa.repository package. This interface provides us with a set of methods for manipulating and retrieving data:
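An abridged excerpt of the kinds of methods JpaRepository (together with the CrudRepository it builds on) exposes; refer to the Spring Data JPA API documentation for the full, exact interface:

```java
// Abridged and simplified; not the complete interface.
public interface JpaRepository<T, ID> {
  <S extends T> S save(S entity);   // insert or update an entity
  Optional<T> findById(ID id);      // look up a single entity by primary key
  List<T> findAll();                // retrieve all entities
  void deleteById(ID id);           // remove an entity by primary key
}
```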
To use this for a specific database entity, such as our Todo class, we must extend this interface with our own TodoRepository interface:
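A sketch of such a repository interface, using the query method described below:

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

public interface TodoRepository extends JpaRepository<Todo, Long> {

  // Spring Data JPA derives the SQL query from the method name:
  // select all todos whose owner has the given email, ordered by id.
  List<Todo> findAllByOwnerEmailOrderByIdAsc(String email);
}
```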
This interface specifies a JpaRepository that’s responsible for persisting Todo entities with a Long-typed ID. Moreover, we’ve added a findAllByOwnerEmailOrderByIdAsc() method that allows us to find all Todos whose owner has the email address given by the method’s email argument. Spring Data JPA uses a query generation mechanism that derives the required SQL query from the method name (refer to Defining Query Methods in the Spring Data JPA documentation for more information).
The TodoRepository can then be injected into other classes, like our TodoService, via Spring’s dependency injection:
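A minimal sketch of constructor injection in such a service (the save() method here is illustrative):

```java
import org.springframework.stereotype.Service;

@Service
public class TodoService {

  private final TodoRepository todoRepository;

  // Spring injects the TodoRepository through the constructor.
  public TodoService(TodoRepository todoRepository) {
    this.todoRepository = todoRepository;
  }

  public Todo save(Todo todo) {
    // Inserts a new row (or updates an existing one) in the todo table.
    return todoRepository.save(todo);
  }
}
```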
Spring will automatically inject the TodoRepository into the constructor of TodoService. With this dependency, we can now insert a new row into the database table by calling todoRepository.save().
The TodoService, in turn, is injected into the TodoController, again using constructor injection, where it is used for creating, retrieving, updating, and deleting Todos in various controller methods. These controller methods are mapped to HTTP paths within our application that can be accessed through HTTP GET (in the case of methods annotated with @GetMapping) and POST (for methods annotated with @PostMapping) requests.
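A sketch of how such a controller might look; the paths, view names, and service methods are assumptions for illustration:

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;

@Controller
public class TodoController {

  private final TodoService todoService;

  // The TodoService is injected through the constructor.
  public TodoController(TodoService todoService) {
    this.todoService = todoService;
  }

  // Handles HTTP GET requests, e.g. to display the dashboard view.
  @GetMapping("/dashboard")
  public String dashboard(Model model) {
    model.addAttribute("todos", todoService.findAll()); // hypothetical lookup
    return "dashboard";
  }

  // Handles HTTP POST requests, e.g. to create a new todo.
  @PostMapping("/todo")
  public String create(Todo todo) {
    todoService.save(todo);
    return "redirect:/dashboard";
  }
}
```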
Finally, the Thymeleaf templates in our application’s resources/templates folder, specifically dashboard.html, show.html, and edit.html, make use of these controller methods to allow the user to display and edit todos.
Enabling local development
One thing is still missing in our setup: In most cases, we wouldn’t want to wait until our entire AWS infrastructure has been redeployed through our nearly continuous deployment pipeline after a code change. Instead, we want to test changes locally.
That’s where a local database instance comes into play. We can use Docker to spin one up by adding a service to the docker-compose.yml file located in our application’s root directory:
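A service definition along these lines would work; the image version, database name, credentials, and port are assumptions suitable only for local development:

```yaml
services:
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=stratospheric
      - POSTGRES_USER=stratospheric
      - POSTGRES_PASSWORD=stratospheric
```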
With these lines added to the docker-compose.yml file, we can now run docker-compose up from the command line in our application’s root directory to start a PostgreSQL instance alongside the other services already defined in our docker-compose.yml.
Now, all we have to do is add these lines to an application-dev.yml properties file in our application’s src/main/resources folder:
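The properties would look roughly like this; the URL, username, and password must match whatever the local docker-compose service is configured with (the values here are assumptions):

```yaml
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/stratospheric
    username: stratospheric
    password: stratospheric
```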
Because our build.gradle file already configures Gradle to use dev as the active Spring profile (by passing -Dspring.profiles.active=dev as a JVM argument), these settings will be picked up automatically when running ./gradlew bootRun.
Find the source code with detailed instructions on how to build and run this application on GitHub.
Conclusion
Relational databases are an essential element of web applications, and PostgreSQL is a popular open source choice for a DBMS to run such a database on. Amazon RDS allows us to manage relational databases in the cloud, which in turn allows us to both account for non-functional requirements, such as scalability, and to provision and manage database resources in a self-service, DevOps fashion.
In this two-part tutorial, we learned about provisioning and deploying the infrastructure required for running a PostgreSQL database on Amazon RDS and had a look at how to use this database in the context of a Spring Boot application.
Using databases is but one aspect of web applications, though. In our book Stratospheric: From Zero to Production with Spring Boot and AWS, we develop an entire web application that seamlessly integrates with other common AWS services, such as Amazon Simple Queue Service (Amazon SQS), Amazon Simple Storage Service (Amazon S3), Amazon Simple Notification Service (Amazon SNS), Amazon Cognito, and Amazon Elastic Container Service (Amazon ECS) using Spring Cloud AWS. We guide readers through the steps required for getting a Spring Boot application running on AWS. Not only do we focus on developing the application with Spring Boot, but we also take a detailed look at getting it ready for production. This includes a nearly continuous deployment pipeline, infrastructure-as-code using CloudFormation and AWS CDK, deployment patterns, and monitoring.
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.