Ente - Museum/Postgres Error "panic: dial tcp [::1]:5432: connect: connection refused" #128

Closed · etuckeriv opened this issue Feb 8, 2025 · 2 comments

etuckeriv commented Feb 8, 2025

Hello! I'm following the tutorial here - https://youtu.be/Gu-zAxAOn1E?si=BHXucdGrojOcyS7K for self-hosting Ente behind Traefik.

A note that's probably important - I'm running Fedora Server 41 with SELinux, and I'm using rootless Podman. I am also using http-external and https-external Traefik entrypoints as outlined in this tutorial - https://youtu.be/IBlZgrwc1T8?si=oMAYwdyj-mHVXhDB

I have followed the Ente tutorial up to the point where we spin up the containers with the compose file. When I run `podman compose up -d`, everything builds and the compose command completes without any errors, but when I check the containers, only minio and postgres are running (postgres also reports as "healthy"). The museum container (and subsequently the socat container) has exited: socat exited because it couldn't find the museum container, and museum exited because its connection to postgres was refused.
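For anyone reproducing this, the exited containers and their exit reasons can be inspected with podman directly. This is a generic diagnostic sketch; the container name `ente-museum-1` is illustrative (your actual names depend on the compose project name and are shown by `podman ps -a`):

```shell
# List all containers, including exited ones, with their current status
podman ps -a --format '{{.Names}}\t{{.Status}}'

# Read the logs of the exited museum container (name is illustrative)
podman logs ente-museum-1
```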

In the museum container logs I see:

```
INFO[0000]main.go:103 main Booting up local server with commit #8656f698c0c66cf1573c2c000b81d2d93c73c69c
INFO[0000]main.go:838 setupDatabase Setting up db
INFO[0000]main.go:845 setupDatabase Connected to DB
panic: dial tcp [::1]:5432: connect: connection refused

goroutine 1 [running]:
main.setupDatabase()
        /etc/ente/cmd/museum/main.go:848 +0x338
main.main()
        /etc/ente/cmd/museum/main.go:123 +0x44c
```

In the Postgres container logs I see:

```
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2025-02-08 23:33:54.868 UTC [46] LOG:  starting PostgreSQL 15.10 (Debian 15.10-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-02-08 23:33:54.869 UTC [46] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-02-08 23:33:54.875 UTC [49] LOG:  database system was shut down at 2025-02-08 23:33:54 UTC
2025-02-08 23:33:54.880 UTC [46] LOG:  database system is ready to accept connections
 done
server started
CREATE DATABASE

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

waiting for server to shut down....2025-02-08 23:33:55.083 UTC [46] LOG:  received fast shutdown request
2025-02-08 23:33:55.086 UTC [46] LOG:  aborting any active transactions
2025-02-08 23:33:55.099 UTC [46] LOG:  background worker "logical replication launcher" (PID 52) exited with exit code 1
2025-02-08 23:33:55.099 UTC [47] LOG:  shutting down
2025-02-08 23:33:55.102 UTC [47] LOG:  checkpoint starting: shutdown immediate
2025-02-08 23:33:55.157 UTC [47] LOG:  checkpoint complete: wrote 918 buffers (5.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.031 s, sync=0.021 s, total=0.058 s; sync files=301, longest=0.004 s, average=0.001 s; distance=4222 kB, estimate=4222 kB
2025-02-08 23:33:55.165 UTC [46] LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

2025-02-08 23:33:55.218 UTC [1] LOG:  starting PostgreSQL 15.10 (Debian 15.10-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-02-08 23:33:55.218 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2025-02-08 23:33:55.218 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2025-02-08 23:33:55.223 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-02-08 23:33:55.230 UTC [62] LOG:  database system was shut down at 2025-02-08 23:33:55 UTC
2025-02-08 23:33:55.240 UTC [1] LOG:  database system is ready to accept connections
```

Here are my config files:

docker-compose.yaml:

```yaml
services:
  museum:
    # Uncomment below if you prefer to build
    #build:
      #context: .
      #args:
        #GIT_COMMIT: development-cluster
    image: ghcr.io/ente-io/server
    #ports:
    #  - 8080:8080 # API
    #  - 2112:2112 # Prometheus metrics
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      # Pass in the config to connect to the DB and MinIO
      ENTE_CREDENTIALS_FILE: /credentials.yaml
      #ENTE_CLI_SECRETS_PATH: /cli-data/secret.txt
      #ENTE_CLI_CONFIG_PATH: /cli-data/
    volumes:
      - ~/podman_volumes/ente/custom-logs:/var/logs:z
      - ~/podman_volumes/ente/museum.yaml:/museum.yaml:Z,ro
      - ~/podman_volumes/ente/scripts/compose/credentials.yaml:/credentials.yaml:z,ro
      #- ~/podman_volumes/ente/cli-data:/cli-data
      #- ~/podman_volumes/ente/exports/ente-photos:/exports
      - ~/podman_volumes/ente/data:/data:z,ro
    networks:
      - ente
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.ente.entrypoints=http-external"
      - "traefik.http.routers.ente.rule=Host(`ente.tuckerhome.io`)"
      - "traefik.http.middlewares.ente-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.ente.middlewares=ente-https-redirect"
      - "traefik.http.routers.ente-secure.entrypoints=https-external"
      - "traefik.http.routers.ente-secure.rule=Host(`ente.tuckerhome.io`)"
      - "traefik.http.routers.ente-secure.tls=true"
      - "traefik.http.routers.ente-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.ente-secure.service=ente"
      - "traefik.http.services.ente.loadbalancer.server.port=8080" # make sure the loadbalancer is the last line!!!
      # Configure CORS middleware if needed
      - "traefik.http.middlewares.ente-secure-cors.headers.accesscontrolallowmethods=GET,HEAD,POST,PUT,DELETE"
      - "traefik.http.middlewares.ente-secure-cors.headers.accesscontrolallowheaders=*"
      - "traefik.http.middlewares.ente-secure-cors.headers.accesscontrolalloworiginlist=https://ente.tuckerhome.io,https://minio.tuckerhome.io"  # Add other origins if needed
      - "traefik.http.middlewares.ente-secure-cors.headers.accesscontrolmaxage=3000"
      - "traefik.http.middlewares.ente-secure-cors.headers.accessControlExposeHeaders=ETag"
      - "traefik.http.middlewares.ente-secure-cors.headers.addvaryheader=true"
      - "traefik.http.routers.ente-secure.middlewares=ente-secure-cors"

  # Resolve "localhost:3200" in the museum container to the minio container.
  socat:
    image: alpine/socat
    network_mode: service:museum
    depends_on:
      - museum
    command: "TCP-LISTEN:3200,fork,reuseaddr TCP:minio:3200"

  postgres:
    image: postgres:15
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpass
      POSTGRES_DB: ente_db
    # Wait for postgres to accept connections before starting museum.
    healthcheck:
      test:
        [
          "CMD",
          "pg_isready",
          "-q",
          "-d",
          "ente_db",
          "-U",
          "pguser"
        ]
      start_period: 40s
      start_interval: 1s
    volumes:
      - ~/podman_volumes/ente/postgres-data:/var/lib/postgresql/data:z
    networks:
      - ente

  minio:
    image: minio/minio
    # Use different ports than the minio defaults to avoid conflicting
    # with the ports used by Prometheus.
    ports:
      - 3200:3200 # API
      - 3201:3201 # Console
    environment:
      MINIO_ROOT_USER: test
      MINIO_ROOT_PASSWORD: testtest
      MINIO_SERVER_URL: https://minio.tuckerhome.io
    command: server /data --address ":3200" --console-address ":3201"
    volumes:
      - ~/podman_volumes/ente/minio-data:/data:z
    networks:
      - ente
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.minio.entrypoints=http-external"
      - "traefik.http.routers.minio.rule=Host(`minio.tuckerhome.io`)"
      - "traefik.http.middlewares.minio-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.minio.middlewares=minio-https-redirect"
      - "traefik.http.routers.minio-secure.entrypoints=https-external"
      - "traefik.http.routers.minio-secure.rule=Host(`minio.tuckerhome.io`)"
      - "traefik.http.routers.minio-secure.tls=true"
      - "traefik.http.routers.minio-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.minio-secure.service=minio"
      - "traefik.http.services.minio.loadbalancer.server.port=3200"

  minio-provision:
    image: minio/mc
    depends_on:
      - minio
    volumes:
      - ~/podman_volumes/ente/scripts/compose/minio-provision.sh:/provision.sh:Z,ro
      - ~/podman_volumes/ente/minio-data:/data:z
    networks:
      - ente
    entrypoint: sh /provision.sh

networks:
  ente:
  proxy:
    external: true
```

museum.yaml:

```yaml
# HTTP connection parameters
http:
    # If true, bind to 443 and use TLS.
    # By default, this is false, and museum will bind to 8080 without TLS.
    # use-tls: true

# Specify the base endpoints for various apps
apps:
    # Default is https://albums.ente.io
    #
    # If you're running a self hosted instance and wish to serve public links,
    # set this to the URL where your albums web app is running.
    public-albums: https://ente.tuckerhome.io

# SMTP configuration (optional)
#
# Configure credentials here for sending mails from museum (e.g. OTP emails).
#
# The smtp credentials will be used if the host is specified. Otherwise it will
# try to use the transmail credentials. Ideally, one of smtp or transmail should
# be configured for a production instance.
#
# username and password are optional (e.g. if you're using a local relay server
# and don't need authentication).
#smtp:
#    host:
#    port:
#    username:
#    password:
#    # The email address from which to send the email. Set this to an email
#    # address whose credentials you're providing.
#    email:

s3:
    are_local_buckets: true
    b2-eu-cen:
        key: test
        secret: testtest
        endpoint: https://minio.tuckerhome.io
        region: eu-central-2
        bucket: b2-eu-cen

# Add this once you have done the CLI part
#internal:
#    admins:
#        - 1580559962386438
```

credentials.yaml:

```yaml
db:
    host: postgres
    port: 5432
    name: ente_db
    user: pguser
    password: pgpass

s3:
    are_local_buckets: true
    b2-eu-cen:
        key: test
        secret: testtest
        endpoint: https://minio.tuckerhome.io
        region: eu-central-2
        bucket: b2-eu-cen
    wasabi-eu-central-2-v3:
        key: test
        secret: testtest
        endpoint: localhost:3200
        region: eu-central-2
        bucket: wasabi-eu-central-2-v3
        compliance: false
    scw-eu-fr-v3:
        key: test
        secret: testtest
        endpoint: localhost:3200
        region: eu-central-2
        bucket: scw-eu-fr-v3
```

minio-provision.sh:

```shell
#!/bin/sh

# Script used to prepare the minio instance that runs as part of the development
# Docker compose cluster.

while ! mc config host add h0 http://minio:3200 test testtest
do
   echo "waiting for minio..."
   sleep 0.5
done

cd /data

mc mb -p b2-eu-cen
mc mb -p wasabi-eu-central-2-v3
mc mb -p scw-eu-fr-v3
```

Initially I changed the usernames and passwords, but after it didn't work I reverted them all to the defaults, and it still doesn't work. I realize this is probably a relatively uncommon setup, but I know Jim has used Podman before. I have added the required SELinux labels to the volume entries (that's what the `z`/`Z` options are for, if you're wondering), though admittedly I'm not totally sure they're necessary or correctly implemented. I have also tried setting SELinux to permissive mode and re-deploying the containers, but I still get the error.

I'm learning Podman as I go, and my assumption is that there may be an issue with the rootless nature of Podman. I have successfully set up Immich in this environment, but I like Ente better, so I wanted to get it running instead. Any guidance is appreciated!
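As an aside for anyone trying the same elimination step: switching SELinux to permissive mode can be done temporarily without a reboot using the standard Fedora tooling. A sketch (the volume path is the one from the compose file above):

```shell
# Check the current SELinux mode (Enforcing / Permissive / Disabled)
getenforce

# Switch to permissive until the next reboot, then redeploy and retest
sudo setenforce 0

# Inspect the labels that the :z/:Z volume options are expected to apply
ls -Z ~/podman_volumes/ente
```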

etuckeriv (Author) commented:
I figured this out. In the compose file, the volumes weren't mapped correctly for the container to find museum.yaml and credentials.yaml. In @JamesTurland's folder structure they live in the ./config directory, but in the compose file they're mounted from the ./ente directory. So you can either move the files or update the volume mounts so museum can find them.
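A sketch of the corrected mounts against the compose file above. The `config/` paths follow the folder layout described in the fix and are illustrative; adjust them to wherever museum.yaml and credentials.yaml actually live on your host:

```yaml
  museum:
    volumes:
      # Mount the config files from the ./config directory they ship in,
      # instead of the ./ente directory (paths are illustrative):
      - ~/podman_volumes/ente/config/museum.yaml:/museum.yaml:Z,ro
      - ~/podman_volumes/ente/config/credentials.yaml:/credentials.yaml:z,ro
```

Without a readable credentials.yaml, museum falls back to its built-in defaults and tries to reach postgres on localhost, which matches the `dial tcp [::1]:5432` panic.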

etuckeriv (Author) commented:

Created a pull request - #129 (comment)

I've never done this before; hopefully I did it right, and apologies if I didn't!
