python 3.x - How to process or save job status details after a successful execution of a Step Function - Stack Overflow


Here is the rough code. It is written in Python using the AWS CDK:

complete_storage_fees_definition = (
        step_functions.Parallel(self, "TryHandlerStorageFees", output_path="$.[0]")
        .branch(
            step_functions.Choice(self, "Select Storage Fees Connector")
            .when(
                step_functions.Condition.string_matches(
                    "$.report_type", "normal_fee"
                ),
                amazon_connector.storage_fees_definition,
            )
            .when(
                step_functions.Condition.string_matches(
                    "$.report_type", "longterm_fee"
                ),
                amazon_connector.longterm_storage_fees_definition,
            )
            .otherwise(
                step_functions.Fail(
                    self,
                    "wrong storage fees connector",
                    cause="Connector not found",
                ),
            )
        )
        .add_catch(
            (
                reporting_connector.send_email_alert_failure_definition(
                    "Storage-Fees"
                )
            ).next(reporting_connector.pipeline_failure_definition("Storage-Fees"))
        )
    )


storage_fees_state_machine = create_state_machine(
        self,
        "storage_fees_state_machine",
        log_group,
        complete_storage_fees_definition,
        "storage-fees",
        timeout_hr=3,
        common_role=role,
    )



def create_state_machine(
        scope,
        state_machine_name,
        log_group,
        definition,
        state_machine_resource_name,
        timeout_hr=1,
        max_attempts=2,
        interval_sec=120,
        backoff_rate=5,
        common_role=None,
        is_tracing_enabled=False,
):
    return step_functions.StateMachine(
        scope,
        state_machine_name,
        logs=step_functions.LogOptions(
            destination=log_group, level=step_functions.LogLevel.ALL
        ),
        tracing_enabled=is_tracing_enabled,
        definition=definition.add_retry(
            max_attempts=max_attempts,
            interval=Duration.seconds(interval_sec),
            backoff_rate=backoff_rate,
        ),
        timeout=Duration.hours(timeout_hr),
        state_machine_name=get_resource_name(state_machine_resource_name),
        role=common_role,
    )

Instead of capturing success for each pipeline individually (by adding another Lambda after the last executable Lambda), is there a way to capture it at the state-machine level, similar to how we handle failure events with the add_catch block?

Thanks.

I tried chaining with .next() before handling failures via .add_catch, but that turns the definition into a Chain object, which does not have the retry param (add_retry) when creating the state machine from this definition. Could you please help me understand how I can find out that the pipeline ran successfully, without adding an extra Lambda to the pipeline itself, so that I can process the status and store it in a DB using some other Lambda function?
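One direction worth considering (a sketch, not a tested answer): Step Functions emits "Step Functions Execution Status Change" events to EventBridge, so instead of putting a reporting step inside the pipeline you can create an EventBridge rule that filters on `status == "SUCCEEDED"` for this state machine's ARN and targets your status-processing Lambda (in CDK, via `events.Rule` and `targets.LambdaFunction`). The snippet below shows the event pattern such a rule would use, plus a toy matcher purely to illustrate how the pattern selects events; the ARNs and the `matches` helper are illustrative placeholders, not part of any AWS SDK.

```python
# Event pattern an EventBridge rule could use to fire only on successful
# executions of one specific state machine. The ARN below is a placeholder.
SUCCESS_PATTERN = {
    "source": ["aws.states"],
    "detail-type": ["Step Functions Execution Status Change"],
    "detail": {
        "status": ["SUCCEEDED"],
        "stateMachineArn": [
            "arn:aws:states:us-east-1:123456789012:stateMachine:storage-fees"
        ],
    },
}


def matches(pattern, event):
    """Toy illustration of EventBridge matching: every pattern key must exist
    in the event, and the event's value must be one of the listed values
    (nested dicts are matched recursively). Real EventBridge supports more."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not isinstance(event[key], dict) or not matches(expected, event[key]):
                return False
        elif event[key] not in expected:
            return False
    return True


# Shape of the status-change event Step Functions publishes (abridged).
sample_event = {
    "source": "aws.states",
    "detail-type": "Step Functions Execution Status Change",
    "detail": {
        "status": "SUCCEEDED",
        "stateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:storage-fees",
        "executionArn": "arn:aws:states:us-east-1:123456789012:execution:storage-fees:run-1",
    },
}

print(matches(SUCCESS_PATTERN, sample_event))  # True
```

With this approach the pipeline definition stays untouched (so add_retry keeps working as in create_state_machine above), and the success-handling Lambda is wired in at the infrastructure level rather than as a pipeline step.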
