Understanding Heroku Postgres Log Statements and Common Errors
Last updated May 30, 2024
Table of Contents
- LOG: duration: 3.565 s …
- LOG: checkpoint starting…
- LOG: could not receive data from client: Connection reset by peer
- LOG: unexpected EOF on client connection
- LOG: [Errno 104] Connection reset by peer
- FATAL: too many connections for role
- FATAL: could not receive data …
- FATAL: role "role-name"…
- FATAL: terminating connection due to administrator command
- FATAL: remaining connection slots are reserved for non-replication superuser connections
- FATAL: no pg_hba.conf entry for host "…", user "u…", database "d…", SSL off
- temporary file: path "file path", size "file size"
- PGError: permission denied for relation
- PGError: operator does not exist
- PGError: relation "table-name" does not exist
- PGError: column "column-name" cannot…
- PGError: SSL SYSCALL error: EOF detected
- PGError: prepared statement "a30" already exists
- PGError: could not write block blockid of temporary file: No space left on device
- Out of Memory Errors
Heroku Postgres logs to the logplex, which collates and publishes your application’s log stream. You can isolate Heroku Postgres events with the heroku logs command by filtering for the postgres process.
Logs are available on Standard-tier or higher databases. They aren’t available on Essential-tier databases.
$ heroku logs -p postgres -t
2012-11-01T17:41:42+00:00 app[postgres]: [15521-1] [CHARCOAL] LOG: checkpoint starting: time
2012-11-01T17:41:43+00:00 app[postgres]: [15522-1] [CHARCOAL] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 transaction log file(s) added, 0 rem...
Besides seeing system-level Postgres activity, these logs are also useful for understanding your application’s use of Postgres and for diagnosing common errors. This article lists common log statements, their purpose, and any action that can be taken.
Find the Heroku Postgres Metrics Logs for your database by filtering for the heroku-postgres process. These metrics logs are independent of the standard logs emitted by Postgres, which use the postgres process.
LOG: duration: 3.565 s …
[12-1] u8akd9ajka [BRONZE] LOG: duration: 3.847 s statement: SELECT "articles".* FROM "articles"...
Queries taking longer than 2 seconds are logged so they can be identified and optimized. Although small numbers of these long-running queries don’t adversely affect application performance, a large quantity may.
Ideally, frequently used queries should execute in less than 10 ms. Queries are typically optimized by adding indexes to avoid sequential scans of the database. Use EXPLAIN to diagnose slow queries.
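For example, EXPLAIN ANALYZE shows the actual execution plan of a query; the articles table and author_id column below are illustrative, not from your schema:
-- Inspect the execution plan; "Seq Scan on articles" signals a missing index
EXPLAIN ANALYZE SELECT * FROM articles WHERE author_id = 42;

-- Add an index without locking writes (illustrative names)
CREATE INDEX CONCURRENTLY index_articles_on_author_id ON articles (author_id);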
LOG: checkpoint starting…
2012-11-01T17:41:42+00:00 app[postgres]: [15521-1] [CHARCOAL] LOG: checkpoint starting: time
2012-11-01T17:41:43+00:00 app[postgres]: [15522-1] [CHARCOAL] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 transaction log file(s) added, 0 rem...
The LOG: checkpoint starting and corresponding LOG: checkpoint complete statements are part of Postgres’ Write-Ahead Logging (WAL) functionality. Postgres automatically writes a checkpoint to the transaction log periodically. You can find more information in the PostgreSQL documentation on WAL.
These statements are part of normal operation and no action is required.
LOG: could not receive data from client: Connection reset by peer
See unexpected EOF on client connection
LOG: unexpected EOF on client connection
app[postgres]: LOG: could not receive data from client: Connection reset by peer
app[postgres]: LOG: unexpected EOF on client connection
heroku[router]: at=error code=H13 desc="Connection closed without response" method=GET path=/crash host=pgeof-1234567890ab.herokuapp.com dyno=web.1 connect=1ms service=10ms status=503 bytes=0
heroku[web.1]: Process exited with status 1
heroku[web.1]: State changed from up to crashed
Although this log is emitted from Postgres, the cause of the error has nothing to do with the database itself. Your application crashed while connected to Postgres and didn’t clean up its connection to the database. Postgres noticed that the client (your application) disappeared without ending the connection properly and logged a message saying so.
If you aren’t seeing your application’s backtrace, make sure that you’re logging to stdout (instead of a file) and that stdout isn’t buffered.
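In a Ruby app, for instance, a one-line sketch ensures stdout is flushed on every write so the backtrace reaches Logplex before the dyno exits:
# config.ru or an initializer: don't buffer stdout
$stdout.sync = true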
LOG: [Errno 104] Connection reset by peer
This error can happen as a result of one of several intermittent network connectivity issues. If you see it only occasionally, Heroku Connect detects the error and retries the sync operation shortly afterward. If the error persists, contact Heroku Support, as the underlying cause may require an engineer to resolve.
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This error occurs on Essential-tier database plans, which have a maximum of 20 connections per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has leaked connections over time. See the Dev Center documentation on concurrency and database connections for a discussion of handling connections in a Rails application.
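To see how many connections are open and what each one is doing, query pg_stat_activity from psql; a sketch:
-- Count open connections to the current database
SELECT count(*) FROM pg_stat_activity WHERE datname = current_database();

-- Inspect each connection's state and most recent query
SELECT pid, state, application_name, query
FROM pg_stat_activity
WHERE datname = current_database();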
FATAL: could not receive data …
FATAL: could not receive data from WAL stream: SSL error: sslv3 alert unexpected message
Replication from a primary database to a follower was interrupted either because of a transient network error or because SSL failed to renegotiate. This is a transient problem and Postgres automatically recovers.
You can find out the current number of commits a follower is behind by using heroku pg:info. Each follower has a “Behind By” entry that indicates how many commits it’s behind its primary.
$ heroku pg:info --app example-app
=== HEROKU_POSTGRESQL_WHITE
...
Following HEROKU_POSTGRESQL_LAVENDER (DATABASE_URL)
Behind By 125 commits
FATAL: role "role-name"…
FATAL: role "u8akd9ajka" is not permitted to log in (PG::Error)
This error occurs when you’re trying to connect to a database with credentials that no longer exist. This can happen for several reasons, including:
- The database credentials were rotated, but you’re still using the old ones to connect. In the specific case of Essential-tier databases, credentials are rotated as part of regular maintenance tasks.
- An Essential-tier database was deprovisioned, but you’re still trying to connect to it.
FATAL: terminating connection due to administrator command
FATAL: terminating connection due to administrator command
This message indicates that a backend connection was terminated. This can happen when a user issues heroku pg:kill from the command line client, or similarly runs SELECT pg_terminate_backend(pid); from a psql session. (pg_cancel_backend(pid) cancels only the current query and produces a different message.)
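A sketch of finding and terminating a backend from psql (the pid value is illustrative):
-- Identify the backend to terminate
SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle';

-- The terminated client sees: FATAL: terminating connection due to administrator command
SELECT pg_terminate_backend(12345);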
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: remaining connection slots are reserved for non-replication superuser connections
Each database plan has a maximum number of connections available, which varies by plan. This message indicates that you’ve reached the maximum number allowed for your application, and the remaining connections are reserved for superuser access (restricted to Heroku Postgres staff). See Heroku Postgres Production Tier Technical Characterization for details on the connection limits of a given plan.
FATAL: no pg_hba.conf entry for host "…", user "u…", database "d…", SSL off
You’re attempting to establish a database connection without SSL. SSL is required for all connections. See the documentation for your Postgres driver on how to establish encrypted connections.
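With libpq-based drivers, for example, appending sslmode=require to the connection URL enforces SSL; the host and credentials below are placeholders:
postgres://user:password@host.example.com:5432/dbname?sslmode=require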
temporary file: path "file path", size "file size"
temporary file: path "base/pgsql_tmp/pgsql_tmp23058.672", size 1073741824
We configure Postgres to log temporary file names and sizes when the size exceeds 10,240 kilobytes. Temporary files can be created when performing sorts or hashes, or for temporary query results, and a log entry is made for each file when it’s deleted.
This log entry is informational; however, creating a large number of temporary files can degrade query performance, so frequent entries point to queries worth tuning.
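You can check cumulative temporary-file activity for your database through pg_stat_database:
-- Temp files created and total bytes written since statistics were last reset
SELECT temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();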
PGError: permission denied for relation
PGError: ERROR: permission denied for relation table-name
Heroku Postgres Essential-tier databases have row limits enforced. When you’re over your row limit and attempt to insert data, you see this error. Upgrade to a Standard-tier or higher database to remove this constraint, or reduce your total number of rows.
PGError: operator does not exist
PGError: ERROR: operator does not exist: character varying = integer
Postgres is stricter about data types than MySQL or SQLite. Postgres checks types and throws an error when an operator is applied to an unsupported data type. For instance, you can’t compare strings with integers without casting.
Make sure the operator is adequate for the data type or that the necessary type casts have been applied.
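A minimal illustration, assuming a hypothetical users table whose phone column is character varying:
-- Fails: operator does not exist: character varying = integer
SELECT * FROM users WHERE phone = 5551234567;

-- Works: compare against a string, or cast explicitly
SELECT * FROM users WHERE phone = '5551234567';
SELECT * FROM users WHERE phone::bigint = 5551234567;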
PGError: relation "table-name" does not exist
PGError: ERROR: relation "documents" does not exist
This error is the standard message displayed by Postgres when a table doesn’t exist. It means your query references a table that isn’t in the database.
Make sure your migrations ran normally, and that you’re referencing a table that exists.
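To confirm which tables actually exist, you can query the information schema (or run \dt in psql):
-- List user tables, excluding system schemas
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema NOT IN ('pg_catalog', 'information_schema');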
PGError: column "column-name" cannot…
PGError: ERROR: column "verified_at" cannot be cast to type "date"
This error occurs when Postgres doesn’t know how to cast all the existing row values in that column to the specified type. Most likely, the column contains values, such as free-form strings, that have no valid representation in the target type.
Inspect all affected column values and manually remove or translate values that can’t be converted to the required type.
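If the values are convertible in principle, a USING clause tells Postgres exactly how to perform the cast; a sketch assuming a hypothetical users table with the verified_at column from the error above:
-- Convert with an explicit expression; rows that still can't be
-- converted cause the statement to fail
ALTER TABLE users ALTER COLUMN verified_at TYPE date USING verified_at::date;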
PGError: SSL SYSCALL error: EOF detected
Errors with similar root causes include:
no connection to the server
SSL error: decryption failed or bad record mac
could not receive data from server: Connection timed out
These errors indicate a client-side violation of the wire protocol. This happens for one of two reasons:
- The Postgres connection is shared between more than one process or thread. Typical offenders are Resque workers and Unicorn. To resolve this issue, establish the Postgres connection after the fork or thread has initialized, as sketched after this list.
- Abrupt client (application-side) disconnections. These can happen for many reasons, from your app crashing to transient network failures. When your app tries to issue another query against Postgres, the connection is gone, leading to a crash. When Heroku detects a crash, we kill that dyno and start a new one, which re-establishes the connection.
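A minimal Unicorn-style sketch of handling connections across a fork; the hooks follow Unicorn’s configuration conventions, and the same idea applies to other forking servers and to Resque workers:
# config/unicorn.rb
before_fork do |server, worker|
  # Don't let workers inherit and share the parent's connection
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # Each worker opens its own connection after forking
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end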
PGError: prepared statement "a30" already exists
This error is similar to the above. There’s no protocol violation, but the client is mistakenly trying to set up a prepared statement with the same name as an existing one without cleaning up the original. (The name of the prepared statement in the error varies).
This is also typically caused by a Postgres connection shared improperly between more than one process or thread.
PGError: could not write block blockid of temporary file: No space left on device
Your query can’t run because there isn’t enough space on disk to store the temporary tables needed to run your query. Temporary tables are automatically dropped at the end of your database session.
The usual culprits are queries with too many complex subqueries, or queries that use DISTINCT; there are tips in our documentation that can be helpful.
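Running EXPLAIN ANALYZE on a suspect query shows whether a sort spills to disk; the table name and the output line below are abbreviated and illustrative:
EXPLAIN ANALYZE SELECT DISTINCT author_id FROM articles ORDER BY author_id;
-- ...
-- Sort Method: external merge  Disk: 105200kB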
Out of Memory Errors
Out of memory (OOM) errors typically happen when the server running the database can’t allocate any more memory for database connections or its cache. Any number of issues can manifest as an OOM error. Before bumping up your Heroku Postgres plan to a higher level, explore all avenues of diagnosing the problem.
Extremely Complex Querying
Joins, sorting, and hash-based operations put pressure on the working memory in Postgres, and for any given database, the number of concurrent connections can exacerbate the problem. Sort operations result from ORDER BY, DISTINCT, and merge joins. Hash-based operations are typically the result of processing IN subqueries, hash-based aggregations, and hash joins. The more complex the query, the more of these operations it’s likely to use. Reviewing the explain plan of your query, or using the Heroku Postgres expensive queries functionality, is a good first step toward understanding its bottlenecks.
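If the pg_stat_statements extension is enabled on your database, you can list the most expensive queries directly; note that the total_exec_time column is named total_time on Postgres 12 and earlier:
-- Top 10 queries by total execution time
SELECT query, calls, total_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;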
Ruby, ActiveRecord, and Prepared Statements
Prepared statements, in and of themselves, aren’t a bad mechanism to use when working with Postgres. Their benefit is that the database can execute the statement with extremely high efficiency. Unfortunately, many ORMs in popular web frameworks, notably ActiveRecord in Ruby on Rails, don’t construct prepared statements effectively. If a query can contain a variable number of parameters, multiple prepared statements are created for what is logically the same query. For example, assume that our application allows users to select products from a products table by id. If customer one selects two products, ActiveRecord defines a query and creates a prepared statement:
SELECT * FROM products WHERE id in ($1, $2);
If customer two, in the same application, for the same query, decides to select four products instead of two, a new prepared statement would be created:
SELECT * FROM products WHERE id in ($1, $2, $3, $4);
The problem with this code path is that customers have some variability in how they select data for the IN clause, and ActiveRecord parameterizes the query based on the number of items in the IN clause. This can result in too many prepared statements getting cached, ultimately using too much memory on the system. If this is happening to your application, consider disabling prepared statements or adjusting the number of prepared statements that are cached.
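You can see how many prepared statements the current session has cached through the pg_prepared_statements view:
-- Prepared statements cached on this connection
SELECT count(*) FROM pg_prepared_statements;

-- Inspect the statements themselves
SELECT name, statement FROM pg_prepared_statements ORDER BY prepare_time DESC;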
Rails 4.1+
To disable prepared statements, update your config/database.yml file to include the prepared_statements parameter:
production:
  adapter: postgresql
  prepared_statements: false
To adjust the size of the prepared statement cache, update your config/database.yml file to include the statement_limit parameter (it defaults to 1000):
production:
  adapter: postgresql
  statement_limit: 200
Rails 2, 3 and Rails 4.0
Disabling prepared statements in these versions of Rails requires an initializer. Create a file called config/initializers/disable_prepared_statements.rb and disable prepared statements within it by updating the connection options:
ActiveRecord::Base.establish_connection(
  ActiveRecord::Base.remove_connection.merge(
    :prepared_statements => false
  )
)
If you’re using a multi-process web server, such as Unicorn, you must also disable prepared statements in the server’s after-fork block.
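For example, with Unicorn, the after_fork hook can re-establish the connection with prepared statements disabled; a sketch mirroring the initializer above:
# config/unicorn.rb
after_fork do |server, worker|
  ActiveRecord::Base.establish_connection(
    ActiveRecord::Base.remove_connection.merge(:prepared_statements => false)
  )
end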
Too Many Connections
While Heroku Postgres has connection limits based on plan type, they’re meant as a guideline. Each connection within Postgres takes up some RAM, and if too many are open at any given time, that too can cause problems for the database. If you run into this situation, use a production-grade connection pooler, such as PgBouncer, to limit the number of open connections and the pressure that opening and closing them puts on RAM.
Database Plan is Too Small
In some cases, the selected database plan is too small for the workload placed on it. If all other avenues for clearing up OOM errors have been explored, the primary database might need to be upgraded to the next plan level. Well-engineered applications serve most query results from the cache that Postgres manages; generally, if the cache hit ratio starts to suffer for the queries being executed, consider a higher plan.