Here is my GitLab CI script (.gitlab-ci.yml):
stages:
  - linting
  - deploy-s3

pylint:
  stage: linting
  script:
    - TIMESTAMP=$(date -u +"%Y%m%d%H%M%S")
    - REPORT_FILENAME="pylint_report_$TIMESTAMP.txt"
    - find source/main/ -type f ! -name "__init__.py" 2>/dev/null | xargs pylint > $REPORT_FILENAME
    - aws s3 cp $REPORT_FILENAME "s3://code/source/main/$REPORT_FILENAME"
  allow_failure: true

bandit:
  stage: linting
  script:
    - TIMESTAMP=$(date -u +"%Y%m%d%H%M%S")
    - REPORT_FILENAME="bandit_report_$TIMESTAMP.txt"
    - find source/main/ -type f ! -name "__init__.py" 2>/dev/null | xargs bandit > $REPORT_FILENAME
    - aws s3 cp $REPORT_FILENAME "s3://code/source/main/$REPORT_FILENAME"
  allow_failure: true

s3-sync:
  stage: deploy-s3
  script:
    - if [ -d "source/main/" ]; then aws s3 sync source/main/ s3://source/main/$CI_PROJECT_PATH_SLUG/ --delete; else echo "Skipping source/main since it doesn't exist"; fi
  only:
    - main
I don't want the CI/CD pipeline to fail. The main goal is for the pipeline to upload the pylint and bandit reports to S3 and then move on to the deploy-s3 stage. I cannot save these quality reports locally; the pylint and bandit output has to go straight to the S3 bucket as a report file. The AWS part is working.
The pipeline is failing at the linting stage because the pylint score is not 10/10; the job dies with ERROR: Job failed: exit status 1.
I also tried these changes to capture the exit status and let the job proceed, but the pipeline still fails:
  find source/main/ -type f ! -name "__init__.py" 2>/dev/null | xargs pylint > $REPORT_FILENAME | true
  aws s3 cp $REPORT_FILENAME "s3://code/source/main/$REPORT_FILENAME"
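To illustrate what I am after, here is a minimal shell sketch of the behavior I want. Note that `| true` pipes stdout (which is already redirected to the report file) into `true`, whereas `|| true` masks the non-zero exit status; with `pipefail` enabled in the runner shell, only the latter guarantees the line exits 0. Here `pylint_stub` is a hypothetical stand-in for the real pylint call, which exits non-zero whenever the score is below 10/10:

```shell
#!/bin/sh
# pylint_stub stands in for pylint: it writes a report line to
# stdout and exits non-zero, as pylint does for a score below 10/10.
pylint_stub() { echo "Your code has been rated 7.50/10"; return 4; }

# Desired behavior: the report goes straight to a file, and the
# line's exit status is forced to 0 so the job can continue.
pylint_stub > report.txt || true
echo "exit status: $?"   # prints "exit status: 0"
```

The real job line would keep the `find … | xargs pylint > $REPORT_FILENAME` pipeline and simply append `|| true` before the `aws s3 cp` step.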