BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CD Foundation - ECPv6.15.17.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CD Foundation
X-ORIGINAL-URL:https://cd.foundation
X-WR-CALDESC:Events for CD Foundation
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250828T120000
DTEND;TZID=America/New_York:20250828T130000
DTSTAMP:20260404T190122Z
CREATED:20250819T120614Z
LAST-MODIFIED:20250819T120658Z
UID:15550-1756382400-1756386000@cd.foundation
SUMMARY:Workshop | DevOps for Data: Delivering and Orchestrating Apache Spark on Containers
DESCRIPTION:Data teams ship critical workloads\, but Spark jobs often live outside the DevOps/CI/CD guardrails. This session shows how to bring Continuous Delivery discipline to Apache Spark on container-orchestrated platforms (with Kubernetes as a concrete example). We’ll cover how to package Spark apps as immutable artifacts\, add automated quality gates (code\, dependency\, and data tests)\, and promote jobs through environments using pipeline-as-code. \nWe will discuss the end-to-end flow: commit → CI build → artifact + test → CD submit to a local container orchestrator → run/observe/roll back. We’ll close with a production checklist for platform teams (multi-tenant quotas\, secrets\, cost controls\, and supply-chain security) and share a template repo you can adapt. If you’re a DevOps\, platform\, or data engineer looking to make Spark delivery as robust as your apps and services\, this is your fast on-ramp. \nKey Takeaways: \n\nCD blueprint for Spark: From commit → artifact (image/jar/wheel) → automated checks → environment promotion → safe rollouts (Jobs/CronJobs/SparkApplication) with rollback strategies.\nQuality gates for code and data: Unit + integration tests\, schema/contract checks\, lightweight data validations; include dependency scanning\, SBOM\, and signatures.\nCDEvents for orchestration: Event-driven pipelines and notifications across CI\, registry\, and runtime; traceability from build to execution.\nPlatform guardrails: Namespaces & quotas\, secrets management\, cost controls\, multi-tenancy patterns\, and operational SLOs for batch jobs.\n\nRSVP for free
URL:https://cd.foundation/event/workshop-devops-for-data-delivering-and-orchestrating-apache-spark-on-containers/
LOCATION:Virtual
CATEGORIES:Continuous Delivery Foundation
ATTACH;FMTTYPE=image/png:https://cd.foundation/wp-content/uploads/sites/35/2025/08/CDF-Workshops-Spark-Kubernetes.png
ORGANIZER;CN="Continuous Delivery Foundation":MAILTO:info@cd.foundation
END:VEVENT
END:VCALENDAR