INTO is a fast-growing Canadian company at the forefront of generative AI innovation. We develop intelligent, conversational AI platforms that are deeply integrated into our clients’ operations — from backend APIs to full-stack GenAI solutions.
We work across multiple projects and industries, delivering scalable AI infrastructure, automated workflows, and production-grade environments. Our engineering team is hands-on, collaborative, and focused on shipping smart, reliable systems.
We’re hiring a Senior DevOps Engineer to join our core team and support multiple engineering pods across AI-driven projects. In this role, you’ll work closely with the CTO and collaborate across engineering teams, taking full ownership of our infrastructure, CI/CD pipelines, and observability stack.
You’ll lead the setup and management of staging and production environments and CI/CD workflows (GitHub Actions), and you’ll support both GCP and AWS deployments depending on the project. You’ll also be responsible for centralizing logs, replacing third-party tools such as Sentry with internal solutions, and implementing robust monitoring and alerting systems.
This is a cross-functional role — you will collaborate with all teams to ensure fast, stable, and secure platform delivery.
Design, deploy, and manage infrastructure across GCP and AWS (staging & production).
Own and improve CI/CD pipelines (GitHub Actions) across multiple projects.
Automate deployment of GenAI models and RAG pipelines in scalable environments.
Set up and manage monitoring, logging, and alerting systems (e.g. Prometheus, Loki, Grafana).
Ensure system reliability, security, and cost optimization.
Manage container orchestration with Docker, Docker Compose, Kubernetes, and Helm.
Support deployment and optimization of vector databases and async worker queues.
Implement cloud networking configurations (VPCs, peering, ingress/egress, subnets).
Apply Infrastructure-as-Code practices using Terraform.
Handle secrets management (GCP Secret Manager, AWS Secrets Manager) and access control.
Debug build pipelines and deployments using kubectl, eksctl, gcloud, and CI tools.
Lead DevOps strategy and provide support and best practices to engineering pods.
Participate in architecture reviews and technical decision-making with the CTO.
5+ years of experience in DevOps, SRE, or cloud infrastructure roles.
Strong proficiency in Python for scripting and automation.
Deep experience with GCP and AWS.
Solid hands-on experience with Docker, Kubernetes, Helm, and Docker Compose.
Proficiency with Infrastructure-as-Code (Terraform).
Expertise in CI/CD tooling, multi-project pipelines, and environment isolation.
Experience with observability tools: Prometheus, Grafana, Loki.
Familiarity with security tools: SonarQube, Trivy.
Experience managing message brokers and async task queues (Celery, Redis, RabbitMQ).
Production-grade Kubernetes ops (GKE Autopilot or EKS).
Advanced networking: VPCs, peering, ingress setups.
Cost estimation and cloud optimization techniques.
This is a high-autonomy role — you will lead all DevOps infrastructure decisions and implementations across the company.
Define standards, choose the right tools, and proactively identify areas to optimize or refactor.
Full ownership — no micromanagement, just impact.
Experience with vector databases and GenAI architecture (RAG, LLMOps).
Experience with serverless (AWS Lambda, Google Cloud Functions).
A Google Cloud or AWS DevOps certification is a strong plus.
Impact Across Projects: Your work will support multiple product teams building AI solutions.
High Autonomy: Work directly with the CTO and shape our DevOps strategy.
Modern Stack: GCP, AWS, Docker, Kubernetes, GitHub Actions, vector DBs.
Remote Flexibility: Work from anywhere in North Africa.
Tech-Driven Culture: We value simplicity, automation, and code quality.
Competitive Compensation: We reward experience, autonomy, and initiative.