GitLab CI Docker Compose: ConnectionError When Accessing FastAPI Service - Stack Overflow

Problem

I am trying to set up integration tests in my GitLab CI pipeline. For that, I use Docker Compose to spin up a container exposing FastAPI HTTP endpoints and a series of pytest tests to query these endpoints. When I run the container on my local machine and execute the pytest tests, everything works. When run in the GitLab pipeline, the container spins up just fine, but all tests fail with

FAILED tests/test_endpoints.py::test_video_upload - requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /video_upload (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd392a9a5a0>: Failed to establish a new connection: [Errno 111] Connection refused'))

Project Setup

docker-compose.yml:

services:
  fitai-pro-service:
    container_name: fitai-pro
    build: .
    image: tabeqc/fitai-pro
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    command: >
      uvicorn api:app --host 0.0.0.0 --port 8000

.gitlab-ci.yml:

stages:
  - test

test:
  stage: test

  image:
    name: docker:latest

  services:
    - name: docker:dind

  variables:
    DOCKER_COMPOSE_FILE: "app/docker-compose.yml"
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""

  before_script:
    - apk add --no-cache python3 py3-pip
    - python3 -m venv /venv
    - source /venv/bin/activate
    - pip install --upgrade pip
    - pip install -r tests/requirements.txt
    - docker-compose -f $DOCKER_COMPOSE_FILE up -d
    - docker exec fitai-pro curl -v http://localhost:8000/health || exit 1

  script:
    - pytest tests

  after_script:
    - docker-compose -f $DOCKER_COMPOSE_FILE down

Dockerfile:

FROM python:3.11-slim
WORKDIR /app
RUN apt-get update && \
    apt-get install -y \
        curl \
        libgl1 \
        libgl1-mesa-glx \
        libglib2.0-0 && \
    rm -rf /var/lib/apt/lists/*
COPY . .
RUN pip install --upgrade -r requirements.txt

part of tests.py:

import os
import requests

URL = "http://localhost:8000"
def test_video_upload(sample_video):
    url = f"{URL}/video_upload"
    with open(sample_video, "rb") as video_file:
        response = requests.post(url, files={"video_file": video_file})

    assert response.status_code == 200
    assert "video_path" in response.json()
    os.unlink(sample_video)

What I have tried

No matter what I set the host part of URL to (localhost, docker, fitai-pro, etc.), the result is the same. Also, docker exec fitai-pro curl -v http://localhost:8000/health works fine locally, but returns the same Connection refused error in the pipeline.
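For reference, I switch hosts by editing the hard-coded URL in tests.py. A small sketch of how it could instead be read from the environment (API_HOST is a made-up variable name, not something the project currently defines):

# Sketch: pick the API host from an environment variable so local runs and
# the CI job can point at different hostnames without editing the test file.
import os

API_HOST = os.environ.get("API_HOST", "localhost")  # hypothetical variable name
URL = f"http://{API_HOST}:8000"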

Any leads are very much appreciated.

asked Mar 25 at 20:37 by tabeqc, edited Mar 25 at 21:36 by Ulrich Eckhardt
  • One thing that's inconsistent is the use of docker-compose vs docker compose. The latter is the only thing you should be using in modern code. Another thing unnecessarily complicating this is that you use docker exec plus a hard-coded container name when you could probably use docker compose exec instead. – Ulrich Eckhardt Commented Mar 25 at 21:39
  • Thanks for your valuable remarks. Changing the pipeline to consistently use docker compose leads to the same outcome. Using docker compose exec seems to require a hard-coded service name instead, which does not provide any benefit compared to docker exec. Do you have any other suggestions? – tabeqc Commented Mar 26 at 10:20

2 Answers

I'm not very well-versed in FastAPI, but in Django this problem is solved by adding "docker" (or an alias) to ALLOWED_HOSTS. It seems that in FastAPI you can try to achieve this with Starlette’s TrustedHostMiddleware.
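If it helps, wiring that up in FastAPI would look roughly like this (a minimal sketch; the allowed_hosts values are guesses based on the hostnames mentioned in the question):

from fastapi import FastAPI
from starlette.middleware.trustedhost import TrustedHostMiddleware

app = FastAPI()

# Reject requests whose Host header is not in this list (assumed hostnames).
app.add_middleware(
    TrustedHostMiddleware,
    allowed_hosts=["localhost", "docker", "fitai-pro"],
)

Note that TrustedHostMiddleware only validates the Host header of requests that actually reach the app.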

It turns out it was just a matter of waiting for the container to finish starting up before running the tests. Using docker as the hostname and adding a repeated health check so that the tests only begin once the container is ready solved the issue.

Example command in before_script that accomplishes this:

    - > 
      i=1; while [ $i -le 15 ]; do
        curl -v http://docker:8000/health && break || sleep 1;
        if [ $i -eq 15 ]; then exit 1; fi;
        i=$((i + 1));
      done
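The same wait can also live on the test side instead of the CI script. A rough sketch of a session-scoped pytest fixture that does this, assuming the /health endpoint shown above and that the service is reachable from the runner via the docker alias:

# conftest.py (sketch): block until the service answers before any test runs.
import time

import pytest
import requests

BASE_URL = "http://docker:8000"  # assumed host alias, matching the curl command above

@pytest.fixture(scope="session", autouse=True)
def wait_for_service():
    # Poll the health endpoint for up to ~30 seconds before giving up.
    for _ in range(30):
        try:
            if requests.get(f"{BASE_URL}/health", timeout=1).status_code == 200:
                return
        except requests.ConnectionError:
            pass
        time.sleep(1)
    pytest.fail("fitai-pro service did not become ready in time")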
