Last updated May 12, 2026
Heroku Postgres offers several features and tools to monitor the performance of your database and diagnose potential problems. This article describes the components available for monitoring your database performance.
Expensive Queries
The leading cause of poor database performance is unoptimized queries. The list of your most expensive queries, available through the Heroku Dashboard, helps you identify and understand the queries that take the most time in your database. For more information, see the Expensive Queries article.
Logging
If your app or framework emits logs related to database access or usage, you can retrieve these messages through your app’s logs. To see a real-time tail of your app logs, use:
$ heroku logs -t -a example-app
To see logs from the Postgres service itself, use heroku logs with the -p postgres process-type option to filter for logs from the database only:
$ heroku logs -p postgres -t -a example-app
To read more about Heroku Postgres log statements, see Understanding Heroku Postgres Log Statements and Common Errors.
pg:diagnose
The pg:diagnose CLI command performs various useful health and diagnostic checks that help analyze and optimize database performance. It produces a report that you can share with teammates or Heroku Support.
Carefully consider the impact on your database and app before making any changes.
pg:diagnose isn’t supported on Shield Heroku Postgres plans.
$ heroku pg:diagnose DATABASE --app example-app
Report 2baea2af-a130-4a85-83be-01535abc187e for example-app::DATABASE_URL
available for one month after creation on 2026-05-07T11:36:50.245578+00:00
GREEN: Connection Count
GREEN: Long Queries
GREEN: Long Transactions
GREEN: Idle in Transaction
GREEN: Indexes
GREEN: Bloat
GREEN: Hit Rate
GREEN: Blocking Queries
GREEN: Sequences
GREEN: Unlogged Tables
GREEN: Table Transaction ID Wraparound
GREEN: Database Transaction ID Wraparound
GREEN: Schema Count
GREEN: Load
Check: Connection Count
Each Postgres connection requires memory, and database plans have a limit on the number of connections they can accept. If you’re using too many database connections, consider using a connection pooler such as PgBouncer or migrating to a larger plan with more RAM and a higher connection limit.
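If this check isn’t green, you can see where connections are going yourself by querying pg_stat_activity. A minimal sketch, run through heroku pg:psql (the app name is illustrative):
$ heroku pg:psql -a example-app
-- Count backend connections by state (active, idle, idle in transaction, ...)
SELECT state, count(*)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()  -- exclude this psql session
GROUP BY state
ORDER BY count(*) DESC;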
Checks: Long Queries, Long Transactions, Idle in Transaction
Long-running queries and transactions can prevent autovacuum from cleaning up dead rows, causing bloat, and can cause followers to lag behind on replication. The reporting threshold for these queries and transactions is 1 minute (60 seconds).
Queries or transactions that remain open for long periods can hold locks on your data, preventing other transactions from running. Consider canceling unexpected long-running queries with the heroku pg:kill command.
Database processes that are idle in transaction are connections waiting on the client to finish the transaction with either COMMIT or ROLLBACK. Clients that disconnect improperly can leave backends in this state. If they’re left open, terminate them with the heroku pg:kill --force command.
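To see these backends yourself before terminating anything, you can query pg_stat_activity directly. A sketch, using the same 60-second threshold as the check:
-- Backends whose current transaction has been open for over 60 seconds
SELECT pid, state, now() - xact_start AS xact_age, left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '60 seconds'
ORDER BY xact_age DESC;
Pass a resulting pid to heroku pg:kill (the pid shown is illustrative):
$ heroku pg:kill 22912 -a example-app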
Check: Indexes
The Indexes check includes three classes of indexes:
- Never Used Indexes: Indexes that the database hasn’t used since the last statistics reset. These indexes are typically safe to drop, unless a follower is using them.
- Low Scans, High Writes: Indexes that the database uses, but infrequently relative to their write volume. The database updates an index on every write, so indexes are expensive to maintain on write-heavy tables. Weigh the cost of slower writes against the performance improvements these indexes provide.
- Seldom-Used Large Indexes: Indexes that the database doesn’t use often, but that take up significant space both on disk and in cache (RAM). These indexes can still be important to your app, for example, if periodic jobs or infrequent traffic patterns use them.
Only the database receiving the query tracks index usage. If you use followers for reads, this check doesn’t account for index usage on those followers.
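To list never-used-index candidates yourself, you can query the statistics views directly. A sketch via heroku pg:psql; idx_scan counts are cumulative since the last statistics reset and, per the note above, don’t include usage on followers:
-- User indexes with zero scans since statistics were last reset
SELECT schemaname, relname AS table_name, indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;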
See Efficient Use of PostgreSQL Indexes for further details and best practices on database indexes.
Check: Bloat
Because Postgres uses MVCC, old versions of updated or deleted rows are made invisible rather than modified in place. Under normal operation, an autovacuum process asynchronously cleans up these dead rows. Sometimes, however, it can’t work fast enough, and some tables become bloated. High bloat can slow down queries, waste space, and even increase load as the database spends more time scanning through dead rows to answer your queries.
You can manually vacuum a table with the VACUUM (VERBOSE, ANALYZE) <table>; command in psql. If you observe high bloat frequently, adjust your database settings, table settings, or both to make autovacuum more aggressive.
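For example, you can lower the autovacuum scale factor for a single hot table so it’s vacuumed when a smaller fraction of its rows are dead. A sketch; the events table name and the value are illustrative, not recommendations:
-- Vacuum "events" when ~5% of its rows are dead, instead of the 20% default
ALTER TABLE events SET (autovacuum_vacuum_scale_factor = 0.05);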
The bloat estimation calculation isn’t always accurate for tables that use columns without column statistics, such as json columns.
Check: Hit Rate
This check reports the overall index hit rate, the overall cache hit rate, and the individual index hit rate per table. Databases with low cache hit rates perform significantly worse because they read from disk instead of from memory. Consider migrating to a larger plan for low cache hit rates, and adding appropriate indexes for low index hit rates.
The overall cache hit rate is the ratio of table data blocks served from the Postgres buffer cache (shared_buffers) to all table data blocks fetched, including those read from the OS page cache or from disk. Blocks that aren’t fetched from shared_buffers can still be in the OS page cache, avoiding actual disk I/O. On larger plans with more RAM, a lower Postgres cache hit ratio doesn’t necessarily indicate poor performance, as the OS efficiently caches frequently accessed data that doesn’t fit in shared_buffers. These two layers of caching (shared_buffers and the OS cache) mean that the total memory available for caching exceeds the Postgres buffer cache.
The overall index hit rate is the ratio of index blocks served from the Postgres buffer cache to all index blocks fetched. Like the cache hit rate, this metric only measures hits against shared_buffers. Index blocks not found in Postgres’ cache can still be served from the OS cache rather than read from disk.
The individual index hit rate per table is the ratio of index scans against a table versus the sum of sequential scans and index scans against the table.
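You can approximate the first two ratios yourself from the pg_statio statistics views. A sketch; per the caveats above, this only measures hits against shared_buffers:
-- Overall table and index hit rates against the Postgres buffer cache
SELECT
  sum(heap_blks_hit)::float
    / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS table_hit_rate,
  sum(idx_blks_hit)::float
    / nullif(sum(idx_blks_hit) + sum(idx_blks_read), 0) AS index_hit_rate
FROM pg_statio_user_tables;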
Monitor the read IOPS metric in the Heroku Postgres metrics logs to observe whether data is actually being read from disk. Cache hit ratios alone don’t account for OS-level caching.
Check: Blocking Queries
Some queries can take locks that block other queries from running. Normally these locks are acquired and released quickly and don’t cause any issues. In pathological situations, some queries can take locks that cause significant lock contention problems if held too long. Consider terminating locking queries with the heroku pg:kill command.
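To see current blocking relationships yourself, a sketch using the pg_blocking_pids function (available in modern Postgres versions):
-- Backends that are currently blocked, and the pids blocking them
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, left(query, 60) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;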
Check: Sequences
This check looks at 32-bit integer (int4) columns that have associated sequences and reports columns that are approaching the maximum value for 32-bit integers. Migrate reported columns to 64-bit bigint (int8) columns to prevent integer overflow from impacting your sequence IDs. An example of such a migration is ALTER TABLE products ALTER COLUMN id TYPE bigint;. Changing the column type can be an expensive operation and cause significant lock contention, especially on large tables.
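To check how much headroom a specific column has, compare its sequence’s last value against the int4 maximum. A sketch; the products_id_seq sequence name is illustrative:
-- Percentage of the 32-bit integer range (max 2147483647) consumed so far
SELECT last_value,
       round(100.0 * last_value / 2147483647, 2) AS pct_of_int4_max
FROM products_id_seq;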
To prevent performance impact on the database, the database skips this check if there are more than 100 integer (int4) columns.
Check: Unlogged Tables
This check looks for tables created as UNLOGGED. Unlogged tables are useful for fast data loading, but they don’t write to the write-ahead log (WAL). Continuous Protection doesn’t cover data written to these tables, and the data isn’t replicated to high-availability standby or follower databases, which leads to data loss when maintenance takes place. Unless you use unlogged tables for a specific use case and understand these risks, move the data to logged tables to prevent data loss. If the tables are small, use ALTER TABLE <table> SET LOGGED to convert them to logged tables. If the tables are large, move the data in batches to control the impact on your database’s resources and on other concurrently running queries.
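To find unlogged tables yourself, query the system catalog. A sketch via heroku pg:psql:
-- Ordinary tables whose persistence is 'u' (unlogged)
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r' AND c.relpersistence = 'u';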
Check: Table Transaction ID Wraparound, Database Transaction ID Wraparound
These checks determine how close individual tables, or the database as a whole, are to transaction ID wraparound. Wraparound is a rare scenario caused by autovacuum being unable to keep up on frequently updated tables, leaving the 32-bit transaction ID space at risk of being exhausted. To prevent the data loss that an actual wraparound would cause, Postgres stops accepting new writes cluster-wide until the issue is resolved, impacting availability. These checks return the table and database names if over 50% of the transaction ID space has been used.
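To approximate the database-level figure yourself, compare the age of each database’s oldest unfrozen transaction ID to the roughly 2-billion usable transaction ID space. A sketch; the 2000000000 denominator is an approximation of the wraparound limit:
-- Rough percentage of transaction ID space consumed per database
SELECT datname,
       round(100.0 * age(datfrozenxid) / 2000000000, 2) AS pct_towards_wraparound
FROM pg_database
ORDER BY 2 DESC;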
Check: Schema Count
This check counts the number of schemas in the database. It reports a yellow warning if there are more than 19 schemas, and a red warning if there are more than 50. Having many schemas can impact Postgres performance and the ability to take successful logical backups. We recommend maintaining no more than 50 schemas on a single database.
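To count schemas yourself, excluding Postgres’s own, a sketch:
-- Count user-created schemas
SELECT count(*)
FROM pg_namespace
WHERE nspname NOT LIKE 'pg\_%'
  AND nspname <> 'information_schema';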
Check: Load
There are many reasons load can be high on a database: bloat, CPU-intensive queries, index building, and too much read/write activity. Review your access patterns, and consider migrating to a larger plan with a more powerful processor.
Resetting Statistics
You can reset the internal Postgres statistics to make it easier to see the effects of changes.
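The standard Postgres call for this is pg_stat_reset(), which resets counters such as idx_scan and block hit statistics for the current database. A sketch via heroku pg:psql; whether your database credential has the privileges to run it can depend on your plan and Postgres version:
-- Reset the statistics counters for the current database
SELECT pg_stat_reset();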