I am running a DBT test that detects around 1250 failing rows, but the failure table in Snowflake remains empty.
The DBT pipeline runs successfully every day at 06:00, and I also ran the test manually:

dbt test -s test_unit4_missing_time --store-failures

Despite DBT detecting failures, the failure table (test_unit4_missing_time) in Snowflake is not storing any data. The table is not being recreated by DBT during runs, even though DBT is writing to the correct schema and the test is configured with store_failures=True.
Initially, the failure table was a transient table, but I replaced it with a permanent table (CREATE OR REPLACE TABLE) to prevent potential data loss. However, since making the table permanent, its last_altered timestamp updates after every DBT run, but the table remains empty.
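For context, the inline configuration on the singular test looks roughly like this (a minimal sketch; the model and column names below are placeholders, not my actual test SQL):

-- tests/test_unit4_missing_time.sql (singular test: any returned rows count as failures)
{{ config(store_failures=true) }}

select *
from {{ ref('unit4_source') }}   -- placeholder model name
where event_time is null         -- placeholder failure condition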
I have tried the following:

1. Checked whether DBT attempted to store failures. Ran:

dbt test -s test_unit4_missing_time --store-failures --debug

Expected to see an INSERT INTO dbt_dev.test_unit4_missing_time statement, but DBT did not attempt to insert anything.
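From what I can tell of the dbt-snowflake adapter, store_failures builds the failures table with a CREATE TABLE ... AS SELECT over the compiled test query rather than row-level INSERTs, and by default it lands in an audit schema named <target_schema>_dbt_test__audit. So the statement to look for in the debug log is roughly this (a sketch of the generated DDL; the database, schema, and inner query are illustrative):

create or replace transient table my_db.dbt_dev_dbt_test__audit.test_unit4_missing_time as (
    select * from my_db.dbt_dev.unit4_source where event_time is null  -- compiled test query (placeholder)
);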
2. Checked whether DBT was overwriting instead of appending failures. Updated dbt_project.yml to:

tests:
  +store_failures: true
  +on_schema_change: append_new_columns

Expected this to preserve old failures and store new ones, but nothing changed.
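From the dbt docs as I understand them, on_schema_change applies to incremental models, so it is most likely ignored under tests:. The configs that should actually apply to tests here are store_failures and, optionally, a schema suffix for the audit schema; a minimal dbt_project.yml sketch (project name is a placeholder):

tests:
  my_project:                  # placeholder project name
    +store_failures: true
    +schema: dbt_test__audit   # suffix for the failures schema; this is the default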
3. Checked whether Snowflake permissions were blocking inserts. Ran:

SHOW GRANTS ON TABLE dbt_dev.test_unit4_missing_time;

Confirmed that the DBT user has INSERT, UPDATE, and DELETE privileges.
4. Checked whether DBT was storing failures in a different schema. Ran:

SELECT table_schema, table_name, last_altered
FROM information_schema.tables
WHERE table_name LIKE 'dbt_test__%';

The failure table is in the expected schema (dbt_dev).
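Since stored failures default to a separate audit schema (<target_schema>_dbt_test__audit) rather than the target schema itself, a broader search might also be worth running (a sketch; adjust the database as needed):

SELECT table_schema, table_name, row_count, last_altered
FROM information_schema.tables
WHERE table_schema ILIKE '%dbt_test__audit%'
   OR table_name ILIKE '%missing_time%';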
5. Disabled Snowflake query result caching. Ran:

ALTER SESSION SET USE_CACHED_RESULT = FALSE;
SELECT * FROM dbt_dev.test_unit4_missing_time;

Still, no rows were stored in the failure table.
6. Dropped the failure table and let DBT recreate it. Ran:

DROP TABLE dbt_dev.test_unit4_missing_time;

followed by:

dbt test -s test_unit4_missing_time --store-failures

Expected DBT to create and populate the failure table, but DBT did not create a new table, even though the test still reported 1250 failing rows.
If someone has encountered a similar issue with DBT tests not persisting failures in Snowflake, I’d appreciate any insights!
Could there be some internal DBT behavior preventing the failure table from updating?
Answer:

I have now changed the setting in dbt_project.yml from:
tests:
  +store_failures: false

to:

tests:
  +store_failures: true
After making this change, the test failure table is now being pushed to Snowflake, and it includes failures from both my singular test and the generic tests. That makes sense, but I'm mainly interested in storing the failures from my singular tests.
I still don't fully understand why setting store_failures: false at the project level prevented even explicitly configured tests (store_failures=True) from storing failures. If anyone can clarify this behavior in DBT, I'd appreciate it!
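One way to check which store_failures value dbt actually resolves for the test is to inspect the node's config (a sketch, assuming a dbt version whose ls command supports JSON output and key selection):

dbt ls -s test_unit4_missing_time --output json --output-keys "name config"
# then look for "store_failures" under "config" in the printed JSON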